Big Data Storage Technology: Getting Started with ClickHouse (Part 1)

Contents

1. ClickHouse Introduction

2. ClickHouse Preparation

3. ClickHouse Installation

4. ClickHouse Commands

5. ClickHouse Cluster

6. ClickHouse SQL

7. ClickHouse Visualization

8. ClickHouse Uninstallation


Getting Started with ClickHouse (Part 1): https://blog.csdn.net/qq262593421/article/details/119514836

Getting Started with ClickHouse (Part 2): https://blog.csdn.net/qq262593421/article/details/120293618

1. ClickHouse Introduction

        ClickHouse is a fully columnar database management system (DBMS) for online analytical processing (OLAP), open-sourced in 2016 by the Russian search giant Yandex. It is designed for OLAP query workloads and can generate analytical reports from real-time data using SQL. ClickHouse lets you create databases and tables, load data, and run queries at runtime without reconfiguring or restarting the server, and it offers linear scalability, ease of use, high reliability, and fault tolerance.

ClickHouse official site: ClickHouse - fast open-source OLAP DBMS

ClickHouse documentation: https://clickhouse.tech/docs/zh

ClickHouse community: http://www.clickhouse.com.cn

ClickHouse online demo: ClickHouse Playground

ClickHouse downloads (tgz packages): https://repo.clickhouse.tech/tgz/stable

ClickHouse downloads (rpm packages): Index of /clickhouse/rpm/stable/x86_64/

ClickHouse downloads (rpm packages): Altinity/clickhouse - Results in Altinity/clickhouse

ClickHouse downloads (GitHub): Tags · ClickHouse/ClickHouse · GitHub

ClickHouse tutorial: http://www.hellow3.com/clickhouse/zh

ClickHouse tutorial: ClickHouse · ClickHouse · 看云

ClickHouse getting started: ClickHouse Docs | ClickHouse Docs

ClickHouse Alibaba Cloud community: Alibaba Cloud Developer Community - ClickHouse

ClickHouse FAQ: Troubleshooting - ApsaraDB for ClickHouse - Alibaba Cloud

ClickHouse on BookStack: Example dataset - Yandex.Metrica - "ClickHouse v20.3 Tutorial" - 书栈网 · BookStack

2. ClickHouse Preparation

1. Verify the SSE 4.2 instruction set

grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"

SSE 4.2 supported 
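The one-liner above can be wrapped in a small reusable helper. A minimal sketch (the function name check_sse42 is my own, not from the post) that takes the cpuinfo file as an argument, so it can be tested against a sample file instead of the live /proc/cpuinfo:

```shell
# Hypothetical helper: reports whether the sse4_2 flag appears in a
# cpuinfo-style file (defaults to the live /proc/cpuinfo).
check_sse42() {
  local cpuinfo="${1:-/proc/cpuinfo}"
  if grep -q sse4_2 "$cpuinfo"; then
    echo "SSE 4.2 supported"
  else
    echo "SSE 4.2 not supported"
  fi
}
```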

2. Raise the open-file limits

vim /etc/security/limits.conf
vim /etc/security/limits.d/20-nproc.conf

Append the following lines to the end of limits.conf and 20-nproc.conf:

* soft nofile 65536 
* hard nofile 65536 
* soft nproc 131072 
* hard nproc 131072

Verify the changes:

ulimit -a
cat /etc/security/limits.conf
cat /etc/security/limits.d/20-nproc.conf
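The append can also be made idempotent, so re-running the setup does not duplicate the entries. A minimal sketch (the add_limits function is my own; it takes an arbitrary file path so it can be exercised on a temp file rather than the real limits.conf):

```shell
# Hypothetical helper: appends the nofile/nproc overrides only if they
# are not already present in the given file.
add_limits() {
  local conf="$1"
  if ! grep -q '^\* soft nofile 65536' "$conf" 2>/dev/null; then
    cat >> "$conf" <<'EOF'
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
EOF
  fi
}
```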

3. Disable SELinux

Security-Enhanced Linux (SELinux) is a kernel security module providing access-control security policies, used to confine users to the policies and rules set by the system administrator.

vim /etc/selinux/config
SELINUX=disabled
cat /etc/selinux/config
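The edit can be scripted as well. A sketch using sed (the set_selinux_disabled name is my own; it takes a file argument so it can be tested on a copy instead of the real /etc/selinux/config):

```shell
# Hypothetical helper: rewrites the SELINUX= line to disabled in place.
set_selinux_disabled() {
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}
```

Note that a reboot (or `setenforce 0` for the running session) is needed for the change to take effect.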

4. Disable the firewall

service iptables stop
service ip6tables stop

5. Install dependencies

yum -y install libtool
yum -y install *unixODBC*

6. Install ZooKeeper

Big Data High Availability: ZooKeeper 3.4.5 installation and configuration

3. ClickHouse Installation

Official installation guide: Installation | ClickHouse Docs

Altinity installation guide: https://github.com/Altinity/clickhouse-rpm-install

看云 standalone install: 1.2 ClickHouse standalone installation · ClickHouse · 看云

Community standalone deploy: Installing ClickHouse 20.8.3.18 standalone on CentOS 7.5 - clickhouseclub

Community source build: Building clickhouse on CentOS 7.4 - clickhouseclub

Community cluster deploy: ClickHouse cluster setup from 0 to 1 - clickhouseclub

Docker on Windows 10: Installing ClickHouse with Docker on Windows - Tencent Cloud

1. Online RPM installation

ClickHouse downloads (rpm packages): Index of /clickhouse/rpm/stable/x86_64/

sudo yum -y install yum-utils
sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/clickhouse.repo
sudo yum -y install clickhouse-server 
sudo yum -y install clickhouse-client

2. Online TGZ installation (recommended)

ClickHouse downloads (tgz packages): https://repo.clickhouse.tech/tgz/stable

Assign the latest ClickHouse version to the variable LATEST_VERSION. At the time of writing, version 21.10.1.8013 had not yet been published to the tgz repository:

export LATEST_VERSION=`curl https://api.github.com/repos/ClickHouse/ClickHouse/tags 2>/dev/null | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -n 1`

So instead, pin the ClickHouse version manually to 21.9.2.17 (look the version number up on the official site).

For production, pin the latest stable release; versions are listed at: Tags · ClickHouse/ClickHouse · GitHub

export LATEST_VERSION=21.9.2.17
curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.tech/tgz/clickhouse-server-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.tech/tgz/clickhouse-client-$LATEST_VERSION.tgz
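The four curl targets above all follow the same naming pattern, so the URLs can be generated from the pinned version as a sanity check before downloading. A minimal sketch (the tgz_urls function name is my own; no network access needed):

```shell
# Hypothetical helper: given a pinned version, print the four tgz URLs
# that the curl commands above download.
tgz_urls() {
  local version="$1"
  local pkg
  for pkg in clickhouse-common-static clickhouse-common-static-dbg \
             clickhouse-server clickhouse-client; do
    echo "https://repo.clickhouse.tech/tgz/${pkg}-${version}.tgz"
  done
}

tgz_urls 21.9.2.17
```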

tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz
sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh

tar -xzvf clickhouse-common-static-dbg-$LATEST_VERSION.tgz
sudo clickhouse-common-static-dbg-$LATEST_VERSION/install/doinst.sh

tar -xzvf clickhouse-server-$LATEST_VERSION.tgz
sudo clickhouse-server-$LATEST_VERSION/install/doinst.sh
sudo /etc/init.d/clickhouse-server start



tar -xzvf clickhouse-client-$LATEST_VERSION.tgz
sudo clickhouse-client-$LATEST_VERSION/install/doinst.sh

During extraction it turned out that the tgz packages from the official repo were corrupted (a real trap), so they had to be downloaded manually and uploaded to the server for installation:

Linux tgz package clickhouse-client-21.9.2.17.tgz

Linux tgz package clickhouse-common-static-21.9.2.17.tgz

Linux tgz package clickhouse-common-static-dbg-21.9.2.17.tgz

Linux tgz package clickhouse-server-21.9.2.17.tgz

Linux tgz package clickhouse-test-21.9.2.17.tgz

tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz
sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh

tar -xzvf clickhouse-common-static-dbg-$LATEST_VERSION.tgz
sudo clickhouse-common-static-dbg-$LATEST_VERSION/install/doinst.sh

tar -xzvf clickhouse-server-$LATEST_VERSION.tgz
sudo clickhouse-server-$LATEST_VERSION/install/doinst.sh
sudo /etc/init.d/clickhouse-server start


tar -xzvf clickhouse-client-$LATEST_VERSION.tgz
sudo clickhouse-client-$LATEST_VERSION/install/doinst.sh

3. Offline RPM installation

mkdir -p /home/software/clickhouse
cd /home/software/clickhouse
wget http://repo.red-soft.biz/repos/clickhouse/stable/el7/clickhouse-client-1.1.54236-4.el7.x86_64.rpm
wget http://repo.red-soft.biz/repos/clickhouse/stable/el7/clickhouse-server-common-1.1.54236-4.el7.x86_64.rpm
wget http://repo.red-soft.biz/repos/clickhouse/stable/el7/clickhouse-compressor-1.1.54236-4.el7.x86_64.rpm
wget http://repo.red-soft.biz/repos/clickhouse/stable/el7/clickhouse-debuginfo-1.1.54236-4.el7.x86_64.rpm
wget http://repo.red-soft.biz/repos/clickhouse/stable/el7/clickhouse-server-1.1.54236-4.el7.x86_64.rpm
rpm -qa | grep clickhouse
rpm -Uvh *.rpm
rpm -qa | grep clickhouse

nohup clickhouse-server --config-file=/etc/clickhouse-server/config.xml > /dev/null 2>&1 &

4. ClickHouse Commands

On CentOS 7 and later, only the systemctl commands take effect.

1. Start the ClickHouse server

sudo /etc/init.d/clickhouse-server start
service clickhouse-server start
systemctl start clickhouse-server.service

2. Stop the ClickHouse server

sudo /etc/init.d/clickhouse-server stop
service clickhouse-server stop
systemctl stop clickhouse-server.service

3. Restart the ClickHouse server

sudo /etc/init.d/clickhouse-server restart
service clickhouse-server restart
systemctl restart clickhouse-server.service

4. Check the ClickHouse server status

sudo /etc/init.d/clickhouse-server status 
service clickhouse-server status 
systemctl status clickhouse-server.service

5. Start the ClickHouse client

clickhouse-client

6. List ClickHouse processes

ps -ef | grep clickhouse

7. Force-kill all ClickHouse processes

ps -ef | grep clickhouse | grep -v grep | awk '{print $2}' | xargs kill -9
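Before pointing a pipeline like this at `kill -9`, it helps to verify the PID extraction (column 2 of ps output) on captured text. A sketch with the filtering factored into a function (the extract_pids name is my own):

```shell
# Hypothetical helper: the same grep/awk filtering as above, reading
# ps-style lines from stdin and printing only the PID column.
extract_pids() {
  grep clickhouse | grep -v grep | awk '{print $2}'
}
```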

8. View the ClickHouse server log

tail -n 300 /var/log/clickhouse-server/clickhouse-server.log

9. View the ClickHouse error log

tail -n 300 /var/log/clickhouse-server/clickhouse-server.err.log

10. Disable ClickHouse auto-start on boot (non-production)

sudo systemctl disable clickhouse-server

11. View the ClickHouse cluster configuration

clickhouse-client -u default --password "" --query "SELECT * FROM system.clusters"

12. List all ClickHouse tables

echo "SELECT database,name,engine FROM system.tables WHERE database != 'system'" | clickhouse-client

13. Show memory usage per system user

ps aux | tail -n +2 | awk '{ printf("%s\t%s\n", $1, $4) }' | clickhouse local -S "user String, memory Float64" -q "SELECT user, round(sum(memory), 2) as memoryTotal FROM table GROUP BY user ORDER BY memoryTotal DESC FORMAT Pretty"
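If clickhouse-local is not installed, the same per-user aggregation can be done in plain awk. This is a fallback sketch of my own, not from the post: it sums column 4 (%MEM) of `ps aux` output grouped by user, skipping the header line.

```shell
# Sum %MEM (column 4) per user from `ps aux`-style input on stdin,
# skipping the header line.
mem_by_user() {
  tail -n +2 | awk '{ mem[$1] += $4 }
                    END { for (u in mem) printf "%s\t%.2f\n", u, mem[u] }'
}
```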

14. Benchmark ClickHouse query execution

echo "SELECT * FROM system.numbers LIMIT 1000" | clickhouse-benchmark -i 5 -h localhost -h localhost 

15. Change the ClickHouse user password

① Method 1: edit /etc/clickhouse-server/users.xml

vim /etc/clickhouse-server/users.xml
<password>123456</password>

② Method 2: edit /etc/clickhouse-client/config.xml

vim /etc/clickhouse-client/config.xml
<config>
    <user>username</user>
    <password>password</password>
    <secure>false</secure>
</config>

5. ClickHouse Cluster

ClickHouse cluster deployment: Tutorial | ClickHouse Docs

ClickHouse cluster configuration: https://clickhouse.tech/docs/zh/operations/configuration-files

ClickHouse replication engines: Data Replication | ClickHouse Docs

ClickHouse distributed configuration: Distributed | ClickHouse Docs

This cluster is configured with shards and replicas. In ClickHouse, only tables in the MergeTree family support replication.

Replicas provide high availability; sharding provides horizontal scaling of data and disaster tolerance.

Creating a table on a single node creates it on that node only. To use replicated tables, the table must be created with a Replicated-prefixed engine, and the same table must then be created on every node.

Replication synchronizes data only, not table schemas, so the table has to be created manually on each machine.

Configuration file identical on every machine: /etc/clickhouse-server/config.xml (no longer identical if the external metrika.xml is not pulled in)

Configuration file different on every machine: /etc/metrika.xml
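To make this concrete: with the {shard} and {replica} macros configured per node (shown below), the same CREATE TABLE statement can be run verbatim on every machine. A sketch of such a statement; the database/table names demo.t_local and the ZooKeeper path are illustrative, not from the post, and it is only printed here since clickhouse-client may not be on PATH:

```shell
# Illustrative ReplicatedMergeTree DDL, held in a variable; on a real
# cluster this would be piped into clickhouse-client on each node.
ddl="CREATE TABLE demo.t_local
(
    id UInt64,
    ts DateTime
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/t_local', '{replica}')
ORDER BY id;"
echo "$ddl"
```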

1. ClickHouse directories and files

Data directory: /var/lib/clickhouse
Log directory: /var/log/clickhouse-server
Default shard-cluster configuration: /etc/metrika.xml
Server configuration file: /etc/clickhouse-server/config.xml
Client configuration file: /etc/clickhouse-client/config.xml
Cron job configuration: /etc/cron.d/clickhouse-server
systemd unit file: /etc/systemd/system/clickhouse-server.service
File-handle limit configuration: /etc/security/limits.d/clickhouse.conf
Main executable: /usr/bin/clickhouse
Client executable: /usr/bin/clickhouse-client
Server executable: /usr/bin/clickhouse-server
Compression executable: /usr/bin/clickhouse-compressor
Server log file: /var/log/clickhouse-server/clickhouse-server.log
Server error log file: /var/log/clickhouse-server/clickhouse-server.err.log

2. ClickHouse cluster plan

ClickHouse cluster plan (ZooKeeper runs on hadoop001-hadoop003; see the zookeeper-servers configuration below):

Host        Shard      Replica
hadoop001   shard01    replica_01_02
hadoop002   shard02    replica_02_02
hadoop003   shard03    replica_03_02
hadoop004   shard01    replica_01_01
hadoop005   shard02    replica_02_01
hadoop006   shard03    replica_03_01
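The plan maps each host to a shard/replica pair, which is exactly what goes into that host's <macros> block later. A sketch deriving the block from the hostname (the macros_xml function name is my own; hostnames are the post's hadoop001..hadoop006):

```shell
# Hypothetical helper: prints the <macros> XML fragment for a host in
# the cluster plan above.
macros_xml() {
  local shard replica
  case "$1" in
    hadoop001) shard=shard01 replica=replica_01_02 ;;
    hadoop002) shard=shard02 replica=replica_02_02 ;;
    hadoop003) shard=shard03 replica=replica_03_02 ;;
    hadoop004) shard=shard01 replica=replica_01_01 ;;
    hadoop005) shard=shard02 replica=replica_02_01 ;;
    hadoop006) shard=shard03 replica=replica_03_01 ;;
    *) echo "unknown host: $1" >&2; return 1 ;;
  esac
  printf '<macros>\n    <shard>%s</shard>\n    <replica>%s</replica>\n</macros>\n' \
    "$shard" "$replica"
}
```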

3. ClickHouse core configuration

cp /etc/clickhouse-server/config.xml /etc/clickhouse-server/config.xml.init
chmod 664 /etc/clickhouse-server/config.xml
chown -R clickhouse:clickhouse /etc/clickhouse-server
vim /etc/clickhouse-server/config.xml
<listen_host>0.0.0.0</listen_host>
<!-- <listen_host>::</listen_host> -->
<!-- <listen_host>::1</listen_host> -->
<!-- <listen_host>127.0.0.1</listen_host> -->
<!-- Path to data directory, with trailing slash. -->
<path>/home/clickhouse/data/</path>

<!-- Path to temporary data for processing hard queries. -->
<tmp_path>/home/clickhouse/tmp/</tmp_path>

<!-- Directory with user provided files that are accessible by 'file' table function. -->
<user_files_path>/home/clickhouse/data/user_files/</user_files_path>
mkdir -p /home/clickhouse/data/
mkdir -p /home/clickhouse/tmp/
mkdir -p /home/clickhouse/data/user_files/

chown -R clickhouse:clickhouse /home/clickhouse/

4. ClickHouse cluster configuration (rpm install)

Note: with a cluster configured, the {shard} and {replica} macros make distributed table creation convenient. At table-creation time you can also define your own shard and replica values; they are not limited to what the cluster configuration declares and can be chosen flexibly by the developer. The cluster's metadata is kept in ZooKeeper, and changing the cluster configuration of an already-defined replicated table can leave the table read-only; if that happens, check whether the ClickHouse metadata in ZooKeeper still matches the current table.

In the rpm-installed version, the default server configuration /etc/clickhouse-server/config.xml states that /etc/metrika.xml is loaded by default as the substitution file for remote servers; here we place the file in a different directory manually.

The default cluster service name is clickhouse_remote_servers, specified by the incl attribute:

<remote_servers incl="clickhouse_remote_servers" />

    <!-- If element has 'incl' attribute, then for it's value will be used corresponding substitution from another file.
         By default, path to file with substitutions is /etc/metrika.xml. It could be changed in config in 'include_from' element.
         Values for substitutions are specified in /yandex/name_of_substitution elements in that file.
      -->

Manually create the shard/replica cluster file metrika.xml under /etc/clickhouse-server/config.d/:

chmod 664 /etc/clickhouse-server/config.d/metrika.xml
chown clickhouse:clickhouse /etc/clickhouse-server/config.d/metrika.xml
vim /etc/clickhouse-server/config.d/metrika.xml

Each machine's configuration file differs only in the <shard> and <replica> values inside the <macros> tag.

See the ClickHouse cluster plan above for details.

<yandex>
<!-- ClickHouse remote servers: cluster configuration (3 shards, 2 replicas per shard) -->
<clickhouse_remote_servers>
    <!-- Cluster name: user-defined -->
    <cluster_3shards_2replicas>
        <!-- shard: data shard 1 -->
        <shard>
            <!-- The data set is split across three shards; with replication each machine ends up holding two thirds of the data: one third as its own shard, one third as a replica of another shard -->
            <internal_replication>true</internal_replication> <!-- replicas within a shard synchronize with each other -->
            <replica>  <!-- replica: first replica of this shard -->
                <host>hadoop004</host>
                <port>9000</port>
            </replica>
            <replica>  <!-- replica: second replica of this shard -->
                <host>hadoop001</host>
                <port>9000</port>
            </replica>
        </shard>
        <!-- shard: data shard 2 -->
        <shard>
            <!-- false: write to all replicas at once; true: write to one replica and let ZooKeeper drive replication -->
            <internal_replication>true</internal_replication> <!-- replicas within a shard synchronize with each other -->
            <replica>  <!-- replica: first replica of this shard -->
                <host>hadoop005</host>
                <port>9000</port>
            </replica>
            <replica>  <!-- replica: second replica of this shard -->
                <host>hadoop002</host>
                <port>9000</port>
            </replica>
        </shard>
        <!-- shard: data shard 3 -->
        <shard>
            <internal_replication>true</internal_replication> <!-- replicas within a shard synchronize with each other -->
            <replica>  <!-- replica: first replica of this shard -->
                <host>hadoop006</host>
                <port>9000</port>
            </replica>
            <replica>  <!-- replica: second replica of this shard -->
                <host>hadoop003</host>
                <port>9000</port>
            </replica>
        </shard>
    </cluster_3shards_2replicas>
</clickhouse_remote_servers>
 
<!-- ZooKeeper cluster configuration -->
<zookeeper-servers>
  <node index="1">
    <host>hadoop001</host>
    <port>2181</port>
  </node>
  <node index="2">
    <host>hadoop002</host>
    <port>2181</port>
  </node>
  <node index="3">
    <host>hadoop003</host>
    <port>2181</port>
  </node>
</zookeeper-servers>
 
<!-- Macro definitions: global variables; {shard} and {replica} expand to the values configured on each machine -->
<!-- <macros> -->
    <!-- layer: distinguishes separate clusters; only one cluster is used here, same on all machines -->
    <!-- <layer>layer01</layer>  -->
    <!-- Shard names: one shard spans two machines -->
    <!-- <shard>shard01</shard> -->
    <!-- <shard>shard02</shard> -->
    <!-- <shard>shard03</shard> -->
    <!-- <shard>shard01</shard> -->
    <!-- <shard>shard02</shard> -->
    <!-- <shard>shard03</shard> -->
    <!-- Replica names: one replica per machine -->
    <!-- <replica>replica01_01</replica> -->
    <!-- <replica>replica02_02</replica> -->
    <!-- <replica>replica03_03</replica> -->
    <!-- <replica>replica01_01</replica> -->
    <!-- <replica>replica02_02</replica> -->
    <!-- <replica>replica03_03</replica> -->
<!-- </macros> -->

<!-- Hadoop001 -->
<macros>
    <shard>shard01</shard>
    <replica>replica_01_02</replica>
</macros>
 
<!-- Do not restrict client source IP addresses -->
<networks>
   <ip>::/0</ip>
</networks>
 
<!-- Data compression method, lz4 by default -->
<clickhouse_compression>
<case>
  <min_part_size>10000000000</min_part_size>
  <min_part_size_ratio>0.01</min_part_size_ratio>
  <method>lz4</method>
</case>
</clickhouse_compression>
 
</yandex>

Only the <macros> tag differs between machines. The per-node values for this 3-shard, 2-replica layout are as follows:

<!-- Hadoop001 -->		
<macros>
    <shard>shard01</shard>
    <replica>replica_01_02</replica>
</macros>

<!-- Hadoop002 -->
<macros>
    <shard>shard02</shard>
    <replica>replica_02_02</replica>
</macros>

<!-- Hadoop003 -->
<macros>
    <shard>shard03</shard>
    <replica>replica_03_02</replica>
</macros>

<!-- Hadoop004 -->
<macros>
    <shard>shard01</shard>
    <replica>replica_01_01</replica>
</macros>

<!-- Hadoop005 -->
<macros>
    <shard>shard02</shard>
    <replica>replica_02_01</replica>
</macros>

<!-- Hadoop006 -->
<macros>
    <shard>shard03</shard>
    <replica>replica_03_01</replica>
</macros>
sed -n '78, 81p' /etc/clickhouse-server/config.d/metrika.xml
clickhouse-client -u default --password "" --query "SELECT * FROM system.clusters"

After the configuration is in place and the cluster has been used once, ClickHouse writes the local macro values into a macros file under the data root:

cat /home/clickhouse/macros

 [root@hadoop001 ~]# cat /home/clickhouse/macros
export shard=shard01
export replica=replica_01_02 

..............................................

[root@hadoop006 ~]# cat /home/clickhouse/macros
export shard=shard03
export replica=replica_03_01

5. ClickHouse main cluster configuration (tgz install)

In the tgz-installed version, the default server configuration /etc/clickhouse-server/config.xml already ships with several predefined local test clusters:

Local single-shard test cluster: test_shard_localhost
Local two-shard test cluster: test_cluster_two_shards_localhost
Two-shard test cluster: test_cluster_two_shards
Two-shard internal-replication test cluster: test_cluster_two_shards_internal_replication
Local single-shard secure test cluster: test_shard_localhost_secure
Unavailable-shard test cluster: test_unavailable_shard

clickhouse-client -u default --password "" --query "SELECT * FROM system.clusters"

① Edit /etc/clickhouse-server/config.xml and comment out the default remote_servers configuration.

② Add the following to /etc/clickhouse-server/config.xml:

    <!-- Remote servers (distributed configuration); cluster service name: clickhouse_remote_servers -->
    <!-- <remote_servers incl="clickhouse_remote_servers" /> -->
    <!-- When using replicated tables, replica metadata is stored in ZooKeeper -->
    <zookeeper incl="zookeeper-servers" optional="true" />
    <!-- Default substitution file for remote_servers -->
    <include_from>/etc/metrika.xml</include_from>
<!-- ClickHouse remote servers: cluster configuration (3 shards, 2 replicas per shard) -->
<remote_servers>
    <!-- Cluster name: user-defined -->
    <cluster_3shards_2replicas>
        <!-- shard: data shard 1 -->
        <shard>
            <!-- The data set is split across three shards; with replication each machine ends up holding two thirds of the data: one third as its own shard, one third as a replica of another shard -->
            <internal_replication>true</internal_replication> <!-- replicas within a shard synchronize with each other -->
            <replica>  <!-- replica: first replica of this shard -->
                <host>hadoop004</host>
                <port>9000</port>
            </replica>
            <replica>  <!-- replica: second replica of this shard -->
                <host>hadoop001</host>
                <port>9000</port>
            </replica>
        </shard>
        <!-- shard: data shard 2 -->
        <shard>
            <!-- false: write to all replicas at once; true: write to one replica and let ZooKeeper drive replication -->
            <internal_replication>true</internal_replication> <!-- replicas within a shard synchronize with each other -->
            <replica>  <!-- replica: first replica of this shard -->
                <host>hadoop005</host>
                <port>9000</port>
            </replica>
            <replica>  <!-- replica: second replica of this shard -->
                <host>hadoop002</host>
                <port>9000</port>
            </replica>
        </shard>
        <!-- shard: data shard 3 -->
        <shard>
            <internal_replication>true</internal_replication> <!-- replicas within a shard synchronize with each other -->
            <replica>  <!-- replica: first replica of this shard -->
                <host>hadoop006</host>
                <port>9000</port>
            </replica>
            <replica>  <!-- replica: second replica of this shard -->
                <host>hadoop003</host>
                <port>9000</port>
            </replica>
        </shard>
    </cluster_3shards_2replicas>
</remote_servers>
<!-- Macro definitions: global variables; {shard} and {replica} expand to the values configured on each machine -->
<!-- <macros> -->
    <!-- layer: distinguishes separate clusters; only one cluster is used here, same on all machines -->
    <!-- <layer>layer01</layer>  -->
    <!-- Shard names: one shard spans two machines -->
    <!-- <shard>shard01</shard> -->
    <!-- <shard>shard02</shard> -->
    <!-- <shard>shard03</shard> -->
    <!-- <shard>shard01</shard> -->
    <!-- <shard>shard02</shard> -->
    <!-- <shard>shard03</shard> -->
    <!-- Replica names: one replica per machine -->
    <!-- <replica>replica01_01</replica> -->
    <!-- <replica>replica02_02</replica> -->
    <!-- <replica>replica03_03</replica> -->
    <!-- <replica>replica01_01</replica> -->
    <!-- <replica>replica02_02</replica> -->
    <!-- <replica>replica03_03</replica> -->
<!-- </macros> -->

<!-- Hadoop001 -->
<macros>
    <shard>shard01</shard>
    <replica>replica_01_02</replica>
</macros>
    <!-- ZooKeeper cluster configuration -->
     <zookeeper-servers>
       <node index="1">
         <host>hadoop001</host>
         <port>2181</port>
       </node>
       <node index="2">
         <host>hadoop002</host>
         <port>2181</port>
       </node>
       <node index="3">
         <host>hadoop003</host>
         <port>2181</port>
       </node>
     </zookeeper-servers>

③ Check the <macros> configuration

sed -n '751, 755p' /etc/clickhouse-server/config.xml
sed -n '78, 81p' /etc/clickhouse-server/config.d/metrika.xml
chmod 664 /etc/clickhouse-server/config.xml
chown clickhouse:clickhouse /etc/clickhouse-server/config.xml
ll /etc/clickhouse-server/config.xml

④ Add the /etc/metrika.xml file; its contents are the same as the rpm-install configuration in subsection 4 above.

chmod 664 /etc/metrika.xml
chown clickhouse:clickhouse /etc/metrika.xml
vim /etc/metrika.xml

6. ClickHouse directory permissions

The data and temp directories here are configured under /home/clickhouse; adjust accordingly if you use other paths.

chown -R clickhouse:clickhouse /home/clickhouse/

6. ClickHouse SQL

clickhouse-client -m --query "SELECT VERSION()"
clickhouse-client -m --query "SHOW DATABASES"
clickhouse-client -m --query "SELECT * FROM system.tables"
clickhouse-client -m --query "SELECT * FROM system.functions WHERE name LIKE '%arr%';"
clickhouse-client -m --query "SELECT * FROM system.clusters"
echo "SHOW TABLES" | clickhouse-client
clickhouse-client --query="SELECT database,name,engine FROM system.tables WHERE database != 'system'"
echo -e "1\n2\n3" | clickhouse-local -q "CREATE TABLE test_table (id Int64) ENGINE = File(CSV, stdin); SELECT id FROM test_table;"

  

7. ClickHouse Visualization

Third-party visualization interfaces | ClickHouse Docs

  1. Tabix GitHub - tabixio/tabix: Tabix.io UI
  2. HouseOps GitHub - HouseOps/HouseOps: A simple ClickHouse client.
  3. LightHouse GitHub - VKCOM/lighthouse: Lightweight interface for ClickHouse
  4. Redash GitHub - getredash/redash: Make Your Company Data Driven. Connect to any data source, easily visualize, dashboard and share your data.
  5. Grafana ClickHouse plugin for Grafana | Grafana Labs
  6. DBeaver DBeaver Community | Free Universal Database Tool
  7.  Holistics Holistics | Self-service BI Platform
  8. DataGrip DataGrip: The Cross-Platform IDE for Databases & SQL by JetBrains
  9. DBM DBM | DBM Websites.

1. Tabix

① Check that the HTTP port responds

curl http://127.0.0.1:8123

② Edit /etc/clickhouse-server/config.xml

vim /etc/clickhouse-server/config.xml

③ Uncomment the following block

    <!-- Default root page on http[s] server. For example load UI from https://tabix.io/ when opening http://localhost:8123 -->
    <!--
    <http_server_default_response><![CDATA[<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>]]></http_server_default_response>
    -->

④ Set the ClickHouse password

vim /etc/clickhouse-server/users.xml
<password>123456</password>

⑤ Restart the ClickHouse server

systemctl restart clickhouse-server.service

⑥ Verify the password

clickhouse-client -u default --password 123456 --query "SHOW DATABASES"

⑦ Log in to Tabix

http://hadoop004:8123

8. ClickHouse Uninstallation

1. Stop ClickHouse

sudo systemctl stop clickhouse-server.service

2. Uninstall with rpm

rpm -qa | grep clickhouse
rpm -qa | grep clickhouse | xargs rpm -e

3. Uninstall with yum

yum list installed | grep clickhouse
yum -y remove clickhouse-*

yum list | grep clickhouse | awk '{print $1}'
yum -y remove clickhouse-client.x86_64
yum -y remove clickhouse-client.noarch
yum -y remove clickhouse-common-dbg.x86_64
yum -y remove clickhouse-common-static.x86_64
yum -y remove clickhouse-common-static-dbg.x86_64
yum -y remove clickhouse-server.x86_64
yum -y remove clickhouse-server.noarch
yum -y remove clickhouse-server-base.x86_64
yum -y remove clickhouse-server-common.x86_64
yum -y remove clickhouse-server-common.noarch
yum -y remove clickhouse-compressor.x86_64
yum -y remove clickhouse-debuginfo.x86_64
yum -y remove clickhouse-test.x86_64
yum -y remove clickhouse-test.noarch

4. Delete configuration files

rm -rf /clickhouse
rm -rf /var/lib/clickhouse
rm -rf /var/log/clickhouse-server
rm -rf /etc/clickhouse-server
rm -rf /etc/clickhouse-client
rm -rf /etc/metrika.xml
rm -rf /etc/cron.d/clickhouse-server
rm -rf /etc/systemd/system/clickhouse-server.service
rm -rf /etc/security/limits.d/clickhouse.conf
rm -rf /var/lib/clickhouse
rm -rf /usr/bin/clickhouse-client
rm -rf /usr/bin/clickhouse-server
rm -rf /usr/bin/clickhouse-compressor
rm -rf /var/log/clickhouse-server
rm -rf /var/run/clickhouse-server
rm -rf /run/clickhouse-server
rm -rf /root/.clickhouse-client-history

5. Find and delete any remaining files

find / -name clickhouse
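Before deleting, it is safer to review what find reports first. A cautious sketch (the list_clickhouse_leftovers name is my own), demonstrated against a scratch directory so nothing real is touched:

```shell
# Hypothetical helper: list clickhouse-named leftovers under a root;
# review the output before turning the -print into a removal.
list_clickhouse_leftovers() {
  find "$1" -name 'clickhouse*' -print
  # after review: find "$1" -name 'clickhouse*' -exec rm -rf {} +
}
```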

6. Remove ClickHouse data from ZooKeeper

zkCli.sh
rmr /clickhouse

(rmr is deprecated in ZooKeeper 3.5 and later; use deleteall /clickhouse there.)

