ClickHouse is a column-oriented database mainly used for online analytical processing (OLAP); it can generate analytical data reports in real time using SQL queries.
Installation and Deployment
Download links:
https://mirrors.aliyun.com/clickhouse/tgz/stable/clickhouse-server-22.2.2.1.tgz
https://mirrors.aliyun.com/clickhouse/tgz/stable/clickhouse-common-static-dbg-22.2.2.1.tgz
https://mirrors.aliyun.com/clickhouse/tgz/stable/clickhouse-common-static-22.2.2.1.tgz
https://mirrors.aliyun.com/clickhouse/tgz/stable/clickhouse-client-22.2.2.1.tgz
Extract the archives:
tar -zxvf clickhouse-client-22.2.2.1.tgz
tar -zxvf clickhouse-common-static-22.2.2.1.tgz
tar -zxvf clickhouse-common-static-dbg-22.2.2.1.tgz
tar -zxvf clickhouse-server-22.2.2.1.tgz
Run the install scripts in the order below (common-static first). Installing clickhouse-server creates a `default` user and prompts for a password; press Enter to leave it empty.
./clickhouse-common-static-22.2.2.1/install/doinst.sh
./clickhouse-common-static-dbg-22.2.2.1/install/doinst.sh
./clickhouse-server-22.2.2.1/install/doinst.sh
./clickhouse-client-22.2.2.1/install/doinst.sh
Start the server:
clickhouse start
Connect to ClickHouse (no password is set here):
clickhouse-client -u default
Configuration directory:
/etc/clickhouse-server
Log directory:
/var/log/clickhouse-server
Data directory:
/var/lib/clickhouse
Connecting from a remote client
Remote access must be enabled before external clients can connect:
cd /etc/clickhouse-server
Make the file writable:
chmod u+w config.xml
vi config.xml
Uncomment the following line:
<listen_host>::</listen_host>
Then restart ClickHouse:
clickhouse restart
Then connect with DBeaver.
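For reference, DBeaver connects over ClickHouse's HTTP interface, which listens on port 8123 by default. A typical JDBC URL (using this tutorial's example server address) looks like:

```
jdbc:clickhouse://192.168.10.102:8123/default
```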
MergeTree Engine
Creating a table
Create a table partitioned by date and ordered by id and sku_id:
create table t_order_mt(
    id UInt32,
    sku_id String,
    total_amount Decimal(16,2),
    create_time Datetime
) engine = MergeTree
partition by toYYYYMMDD(create_time)
primary key (id)
order by (id, sku_id);
insert into t_order_mt values
(101,'sku_001',1000.00,'2020-06-01 12:00:00') ,
(102,'sku_002',2000.00,'2020-06-01 11:00:00'),
(102,'sku_004',2500.00,'2020-06-01 12:00:00'),
(102,'sku_002',2000.00,'2020-06-01 13:00:00'),
(102,'sku_002',12000.00,'2020-06-01 13:00:00'),
(102,'sku_002',600.00,'2020-06-02 12:00:00');
SELECT * from t_order_mt;
Rows with a create_time on June 1 land in one partition and rows on June 2 in another.
After the table is created, its data is stored on disk; ClickHouse automatically creates a separate part directory per partition under:
cd /var/lib/clickhouse/data/default/t_order_mt
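The part directory names embed the partition ID, which is the value of toYYYYMMDD(create_time). A small Python sketch (illustration only, not ClickHouse code) of how rows are routed to partitions:

```python
from datetime import datetime

def to_yyyymmdd(ts: str) -> int:
    """Mimic ClickHouse's toYYYYMMDD(): collapse a timestamp to its day as an integer."""
    d = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    return d.year * 10000 + d.month * 100 + d.day

rows = [
    (101, "sku_001", "2020-06-01 12:00:00"),
    (102, "sku_002", "2020-06-01 11:00:00"),
    (102, "sku_002", "2020-06-02 12:00:00"),
]

# Group rows by partition key, the way MergeTree routes them into part directories.
partitions = {}
for id_, sku, ts in rows:
    partitions.setdefault(to_yyyymmdd(ts), []).append((id_, sku))

print(sorted(partitions))  # [20200601, 20200602]
```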
Secondary (data-skipping) indexes
create table t_order_mt2(
    id UInt32,
    sku_id String,
    total_amount Decimal(16,2),
    create_time Datetime,
    INDEX a total_amount TYPE minmax GRANULARITY 5
) engine = MergeTree
partition by toYYYYMMDD(create_time)
primary key (id)
order by (id, sku_id);
insert into t_order_mt2 values
(101,'sku_001',1000.00,'2020-06-01 12:00:00') ,
(102,'sku_002',2000.00,'2020-06-01 11:00:00'),
(102,'sku_004',2500.00,'2020-06-01 12:00:00'),
(102,'sku_002',2000.00,'2020-06-01 13:00:00'),
(102,'sku_002',12000.00,'2020-06-01 13:00:00'),
(102,'sku_002',600.00,'2020-06-02 12:00:00');
SELECT * from t_order_mt2 WHERE total_amount > 900;
The primary index is stored in primary.idx inside each part directory.
The secondary index is stored alongside it in the same part directory (for the index defined above, in files named skp_idx_a.idx plus a mark file).
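Conceptually, a minmax skip index stores only the (min, max) of the indexed expression for each block of GRANULARITY granules; at query time, blocks whose range cannot satisfy the predicate are skipped entirely. A toy Python sketch with made-up block values:

```python
# Each entry is the (min, max) of total_amount over one indexed block
# (illustrative values, not real index contents).
blocks = [
    (600.00, 800.00),
    (1000.00, 2500.00),
    (100.00, 500.00),
]

# For a predicate "total_amount > 900", only blocks whose max exceeds
# the threshold can possibly contain matching rows; the rest are skipped.
threshold = 900.00
to_read = [i for i, (lo, hi) in enumerate(blocks) if hi > threshold]
print(to_read)  # [1]
```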
TTL
A TTL cannot be applied to columns that are part of the primary key:
create table t_order_mt3(
    id UInt32,
    sku_id String,
    total_amount Decimal(16,2) TTL create_time + interval 10 SECOND,
    create_time Datetime
) engine = MergeTree
partition by toYYYYMMDD(create_time)
primary key (id)
order by (id, sku_id);
insert into t_order_mt3 values
(106,'sku_001',1000.00,'2023-11-22 21:17:00'),
(107,'sku_002',2000.00,'2023-11-22 21:17:10'),
(110,'sku_003',600.00,'2023-11-22 21:17:20');
SELECT * from t_order_mt3;
Once more than 10 seconds have passed since create_time, total_amount is reset to 0 (the column type's default). The reset is applied during background merges, so it may not be visible immediately; OPTIMIZE TABLE t_order_mt3 FINAL forces it.
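The expiry rule can be sketched in a few lines of Python (illustration only; in ClickHouse the reset actually happens when parts are rewritten during merges):

```python
from datetime import datetime, timedelta

TTL = timedelta(seconds=10)  # matches "TTL create_time + interval 10 SECOND"

def visible_amount(amount, create_time, now):
    """Column TTL: once expired, the value reverts to the type's default (0 for Decimal)."""
    return amount if now < create_time + TTL else 0

now = datetime(2023, 11, 22, 21, 17, 15)
rows = [
    (106, 1000.00, datetime(2023, 11, 22, 21, 17, 0)),   # expired at 21:17:10
    (107, 2000.00, datetime(2023, 11, 22, 21, 17, 10)),  # expires at 21:17:20
]
print([(id_, visible_amount(a, t, now)) for id_, a, t in rows])
```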
MaterializeMySQL engine: syncing the MySQL binlog
Add the following to MySQL's my.cnf to enable the binlog, then restart MySQL:
[client]
default_character_set=utf8
[mysqld]
collation_server=utf8_general_ci
character_set_server=utf8
## server_id must be unique within the replication topology
server_id=1
## enable binary logging
log-bin=mall-mysql-bin
## binlog cache size per transaction
binlog_cache_size=1M
## binlog format to use (mixed, statement, row)
binlog_format=row
# enable GTID mode
gtid-mode=on
# enforce GTID consistency (some CREATE TABLE forms are unsupported when on)
enforce-gtid-consistency=on
# on a replica, also write replicated operations to its own binlog
log-slave-updates=1
Log in to ClickHouse:
clickhouse-client -u default
In ClickHouse, enable the experimental materialized engine, then create a database that connects to the MySQL instance:
set allow_experimental_database_materialized_mysql=1;
CREATE DATABASE shucang ENGINE = MaterializeMySQL('192.168.10.104:13306','shucang','root','123456');
Insert data on the MySQL side and it is synced to ClickHouse automatically. Note that the data cannot be modified from the ClickHouse side; it is query-only.
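Conceptually, the engine tails the MySQL binlog and replays row events into local tables. A toy Python sketch of that replay loop (illustration only; real replication also handles GTIDs, DDL, transactions, and ordering):

```python
# A made-up stream of binlog row events, keyed by primary key "id".
binlog = [
    ("insert", {"id": 1, "name": "a"}),
    ("insert", {"id": 2, "name": "b"}),
    ("update", {"id": 1, "name": "a2"}),
    ("delete", {"id": 2}),
]

# Replay the events against a local copy of the table.
replica = {}
for op, row in binlog:
    if op in ("insert", "update"):
        replica[row["id"]] = row
    elif op == "delete":
        replica.pop(row["id"], None)

print(replica)  # {1: {'id': 1, 'name': 'a2'}}
```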
Installing Prometheus with Docker
Create the Prometheus configuration that scrapes ClickHouse:
mkdir /home/prometheus
cd /home/prometheus/
vi prometheus.yml
global:
  scrape_interval: 60s
  evaluation_interval: 60s
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['192.168.10.104:9090']
  - job_name: clickhouse
    static_configs:
      - targets: ['192.168.10.102:9363']
Start Prometheus:
docker run -d --name prometheus -p 9090:9090 -v /home/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
Modify the ClickHouse configuration
cd /etc/clickhouse-server
vi config.xml
Uncomment the <prometheus> section so metrics are exposed on port 9363:
<prometheus>
    <endpoint>/metrics</endpoint>
    <port>9363</port>
    <metrics>true</metrics>
    <events>true</events>
    <asynchronous_metrics>true</asynchronous_metrics>
</prometheus>
Restart ClickHouse:
clickhouse restart
Open Prometheus in a browser and check the targets page:
http://192.168.10.104:9090/targets
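Once the clickhouse target shows as UP, the exported metrics can be queried from the Prometheus UI. With the endpoint enabled above, ClickHouse exports metrics under names prefixed ClickHouseProfileEvents_, ClickHouseMetrics_, and ClickHouseAsyncMetrics_ (exact names can vary by version). An example PromQL query:

```
# Queries started per second, averaged over the last 5 minutes
rate(ClickHouseProfileEvents_Query[5m])
```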
Installing Grafana with Docker
mkdir /home/grafana-storage
chmod 777 -R /home/grafana-storage
docker run -d --name grafana -p 3000:3000 -v /home/grafana-storage:/var/lib/grafana grafana/grafana
Open Grafana in a browser; the default account is admin, password admin:
http://192.168.10.104:3000/login
Add a Prometheus data source and enter the connection settings.
Browse the dashboard template gallery and pick a template that suits you.
Click to download the template's JSON file.
Import the JSON file into Grafana.