1. Environment
Install the docker-compose edition of TiDB on CentOS 7.9.
VM configuration: 2 vCPUs / 8 GB RAM / 40 GB disk.
CentOS minimal installation.
2. Installation steps
2.1 Install CentOS 7.9
(Omitted.)
2.2 Install Docker
(1) Install the dependency packages
yum install -y yum-utils device-mapper-persistent-data lvm2
(2) Configure the docker-ce repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
(3) List the available docker-ce versions
yum list docker-ce --showduplicates | sort -r
[root@localhost ~]# yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror
Installed Packages
Available Packages
* updates: mirrors.ustc.edu.cn
Loading mirror speeds from cached hostfile
* extras: mirrors.ustc.edu.cn
docker-ce.x86_64 3:20.10.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.9-3.el7 @docker-ce-stable
docker-ce.x86_64 3:20.10.8-3.el7 docker-ce-stable
...
(4) Pick one version and install it
yum install docker-ce-20.10.9-3.el7 -y
(5) Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
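An optional sanity check that the service is running and enabled at boot:
systemctl is-active docker
systemctl is-enabled docker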
(6) Configure a registry mirror (accelerator)
cat <<EOF> /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://x83mabsk.mirror.aliyuncs.com"]
}
EOF
(7) Apply the configuration
systemctl daemon-reload
systemctl restart docker
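Optionally, confirm the mirror was picked up; docker info lists the configured mirrors:
docker info | grep -A 1 "Registry Mirrors"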
(8) Check the Docker version
[root@localhost 20221204]# docker version
Client: Docker Engine - Community
Version: 20.10.21
API version: 1.41
Go version: go1.18.7
Git commit: baeda1f
Built: Tue Oct 25 18:04:24 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.9
API version: 1.41 (minimum version 1.12)
Go version: go1.16.8
Git commit: 79ea9d3
Built: Mon Oct 4 16:06:37 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.10
GitCommit: 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
[root@localhost 20221204]#
2.3 Install docker-compose
(1) Check the latest docker-compose release (v2.14.0 at the time of writing) at:
https://github.com/docker/compose/releases
(2) Download and install docker-compose v2.14.0
[root@localhost ~]# curl -L "https://github.com/docker/compose/releases/download/v2.14.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 42.8M 100 42.8M 0 0 4213k 0 0:00:10 0:00:10 --:--:-- 5699k
[root@localhost ~]#
(3) Make the binary executable
[root@localhost ~]# chmod +x /usr/local/bin/docker-compose
(4) Check the docker-compose version
[root@localhost ~]# docker-compose --version
Docker Compose version v2.14.0
[root@localhost ~]#
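If docker-compose is later invoked through sudo or a service whose PATH does not include /usr/local/bin, an optional symlink avoids "command not found":
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose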
2.4 Download and install TiDB
(1) Download tidb-docker-compose
git clone https://github.com/pingcap/tidb-docker-compose.git
Because the command above kept getting "connection refused", tidb-docker-compose-master.zip was downloaded directly from GitHub instead.
(2) Unzip tidb-docker-compose-master.zip
unzip tidb-docker-compose-master.zip
(3) Enter the directory
cd tidb-docker-compose-master
(4) Run docker-compose pull to pull the images
[root@localhost tidb-docker-compose-master]# docker-compose pull
[+] Running 55/55
⠿ tispark-master Skipped - Image is already being pulled by tispark-slave0 0.0s
⠿ tikv1 Skipped - Image is already being pulled by tikv2 0.0s
⠿ pd1 Skipped - Image is already being pulled by pd0 0.0s
⠿ tikv0 Skipped - Image is already being pulled by tikv2 0.0s
⠿ pd2 Skipped - Image is already being pulled by pd0 0.0s
⠿ prometheus Pulled 174.7s
⠿ aab39f0bc16d Pull complete 145.5s
⠿ 2cd9e239cea6 Pull complete 146.4s
⠿ 0266ca3d0dd9 Pull complete 157.6s
⠿ 341681dba10c Pull complete 158.1s
⠿ 8f6074d68b9e Pull complete 158.1s
⠿ 2fa612efb95d Pull complete 158.2s
⠿ 151829c004a9 Pull complete 158.2s
⠿ 75e765061965 Pull complete 158.3s
⠿ b5a15632e9ab Pull complete
⠿ pushgateway Pulled 161.5s
⠿ 8ddc19f16526 Pull complete 143.6s
⠿ a3ed95caeb02 Pull complete 145.7s
⠿ 8279f336cdd3 Pull complete 144.3s
⠿ 92ea3322eea5 Pull complete 145.4s
⠿ tikv2 Pulled 170.9s
⠿ a5bfcc748ffc Pull complete 129.7s
⠿ 6ac6ee01b237 Pull complete 155.0s
⠿ tispark-slave0 Pulled 138.4s
⠿ 169185f82c45 Pull complete 4.7s
⠿ ca14bef7a00d Pull complete 26.6s
⠿ db403405ee2d Pull complete 26.6s
⠿ 247c061c09b7 Pull complete 128.0s
⠿ 9aecf7988d49 Pull complete 137.4s
⠿ 5c863ea94449 Pull complete 137.5s
⠿ pd0 Pulled 156.5s
⠿ 1197d937b561 Pull complete 140.2s
⠿ 145e91b0363f Pull complete 140.7s
⠿ tidb-vision Pulled 158.7s
⠿ ff3a5c916c92 Pull complete 136.0s
⠿ 25b4d1376ceb Pull complete 141.5s
⠿ 7f59066db563 Pull complete 141.8s
⠿ 552cc3a8725c Pull complete 141.9s
⠿ bdcc5af847c1 Pull complete 141.9s
⠿ bec589d0b766 Pull complete 142.3s
⠿ 20405cfd1a4f Pull complete 142.7s
⠿ c6073a35b3d7 Pull complete 142.9s
⠿ grafana Pulled 195.2s
⠿ f7e2b70d04ae Pull complete 161.4s
⠿ fc263172e074 Pull complete 161.5s
⠿ 5d125c70da52 Pull complete 162.1s
⠿ 1b52ecccba1a Pull complete 179.0s
⠿ 886991020d89 Pull complete 179.0s
⠿ 8d4018c3f38b Pull complete 179.1s
⠿ tidb Pulled 67.5s
⠿ 9d48c3bd43c5 Pull complete 26.6s
⠿ 9812b53cc158 Pull complete 28.4s
⠿ e8355eb28dda Pull complete 29.1s
⠿ 120e0a1644dc Pull complete 30.6s
⠿ 87bd45d8d814 Pull complete 66.6s
[root@localhost tidb-docker-compose-master]#
(5) View the pulled images
[root@localhost tidb-docker-compose-master]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
pingcap/tidb latest 778bf9e1e051 19 months ago 145MB
pingcap/tikv latest 6e34b1d95950 19 months ago 355MB
pingcap/pd latest d55858ba1d82 19 months ago 151MB
pingcap/tispark v2.1.1 501543755826 3 years ago 894MB
grafana/grafana 6.0.1 ffd9c905f698 3 years ago 241MB
pingcap/tidb-vision latest e9b25d9f7bdb 4 years ago 47.6MB
prom/prometheus v2.2.1 cc866859f8df 4 years ago 113MB
prom/pushgateway v0.3.1 434efa6ed9db 6 years ago 13.3MB
[root@localhost tidb-docker-compose-master]#
Note: disable SELinux before starting the cluster:
setenforce 0
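Note that setenforce 0 only lasts until the next reboot; to keep SELinux permissive across reboots, also update /etc/selinux/config, for example:
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config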
(6) Start the containers
[root@localhost tidb-docker-compose-master]# docker-compose up -d
[+] Running 14/14
⠿ Network tidb-docker-compose-master_default Created 0.2s
⠿ Container tidb-docker-compose-master-pushgateway-1 Started 2.5s
⠿ Container tidb-docker-compose-master-tidb-vision-1 Started 3.0s
⠿ Container tidb-docker-compose-master-pd2-1 Started 3.0s
⠿ Container tidb-docker-compose-master-prometheus-1 Started 2.9s
⠿ Container tidb-docker-compose-master-grafana-1 Started 2.4s
⠿ Container tidb-docker-compose-master-pd1-1 Started 2.9s
⠿ Container tidb-docker-compose-master-pd0-1 Started 2.9s
⠿ Container tidb-docker-compose-master-tikv2-1 Started 3.8s
⠿ Container tidb-docker-compose-master-tikv1-1 Started 4.0s
⠿ Container tidb-docker-compose-master-tikv0-1 Started 4.0s
⠿ Container tidb-docker-compose-master-tispark-master-1 Started 6.9s
⠿ Container tidb-docker-compose-master-tidb-1 Started 7.2s
⠿ Container tidb-docker-compose-master-tispark-slave0-1 Started 9.0s
[root@localhost tidb-docker-compose-master]#
(7) View the running containers
[root@localhost 20221204]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d4b205b49388 pingcap/tispark:v2.1.1 "/opt/spark/sbin/sta…" 4 hours ago Up 4 hours 0.0.0.0:38081->38081/tcp, :::38081->38081/tcp tidb-docker-compose-master-tispark-slave0-1
f484bdcf01c7 pingcap/tidb:latest "/tidb-server --stor…" 4 hours ago Up 4 hours 0.0.0.0:4000->4000/tcp, :::4000->4000/tcp, 0.0.0.0:10080->10080/tcp, :::10080->10080/tcp tidb-docker-compose-master-tidb-1
2d94b8c34e63 pingcap/tispark:v2.1.1 "/opt/spark/sbin/sta…" 4 hours ago Up 4 hours 0.0.0.0:7077->7077/tcp, :::7077->7077/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp tidb-docker-compose-master-tispark-master-1
7cd39865285e pingcap/tikv:latest "/tikv-server --addr…" 4 hours ago Up 4 hours 20160/tcp tidb-docker-compose-master-tikv1-1
a8e6369c3884 pingcap/tikv:latest "/tikv-server --addr…" 4 hours ago Up 4 hours 20160/tcp tidb-docker-compose-master-tikv0-1
6ebfcaf9cb25 pingcap/tikv:latest "/tikv-server --addr…" 4 hours ago Up 4 hours 20160/tcp tidb-docker-compose-master-tikv2-1
4f842680e8ab prom/pushgateway:v0.3.1 "/bin/pushgateway --…" 4 hours ago Up 4 hours 9091/tcp tidb-docker-compose-master-pushgateway-1
22b4e380833e prom/prometheus:v2.2.1 "/bin/prometheus --l…" 4 hours ago Up 4 hours 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp tidb-docker-compose-master-prometheus-1
175ebd6fdd79 pingcap/pd:latest "/pd-server --name=p…" 4 hours ago Up 4 hours 2380/tcp, 0.0.0.0:49154->2379/tcp, :::49154->2379/tcp tidb-docker-compose-master-pd1-1
0a33b9414678 grafana/grafana:6.0.1 "/run.sh" 4 hours ago Up 4 hours 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp tidb-docker-compose-master-grafana-1
7a829cadcae4 pingcap/tidb-vision:latest "/bin/sh -c 'sed -i …" 4 hours ago Up 4 hours 80/tcp, 443/tcp, 2015/tcp, 0.0.0.0:8010->8010/tcp, :::8010->8010/tcp tidb-docker-compose-master-tidb-vision-1
7be8713e1bb2 pingcap/pd:latest "/pd-server --name=p…" 4 hours ago Up 4 hours 2380/tcp, 0.0.0.0:49153->2379/tcp, :::49153->2379/tcp tidb-docker-compose-master-pd0-1
687c06ad9d04 pingcap/pd:latest "/pd-server --name=p…" 4 hours ago Up 4 hours 2380/tcp, 0.0.0.0:49155->2379/tcp, :::49155->2379/tcp tidb-docker-compose-master-pd2-1
[root@localhost 20221204]#
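If any container does not stay up, docker-compose (run from the same directory) can show its status and logs, for example:
docker-compose ps
docker-compose logs -f tidb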
2.5 Download and install the MySQL client
(1) Prepare the MySQL client rpm packages
mysql-community-common-5.7.40-1.el7
mysql-community-libs-5.7.40-1.el7
mysql-community-client-5.7.40-1.el7
(2) Remove mariadb
rpm -e mariadb-libs-5.5.68-1.el7.x86_64 --nodeps
(3) Install the MySQL client
[root@localhost ~]# rpm -ivh mysql-community-client-5.7.40-1.el7.x86_64.rpm mysql-community-libs-5.7.40-1.el7.x86_64.rpm mysql-community-common-5.7.40-1.el7.x86_64.rpm
warning: mysql-community-client-5.7.40-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 3a79bd29: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:mysql-community-common-5.7.40-1.e################################# [ 33%]
2:mysql-community-libs-5.7.40-1.el7################################# [ 67%]
3:mysql-community-client-5.7.40-1.e################################# [100%]
[root@localhost ~]#
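An optional check that the client was installed correctly:
mysql --version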
2.6 Connect to TiDB with the MySQL client
[root@localhost ~]# mysql -h 127.0.0.1 -P 4000 -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 5.7.25-TiDB-v5.0.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
Copyright (c) 2000, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql |
| test |
+--------------------+
5 rows in set (0.00 sec)
mysql>
2.7 Set the root password and log in with it
(1) View the users
mysql> select user,host from mysql.user;
+------+------+
| user | host |
+------+------+
| root | % |
+------+------+
1 row in set (0.01 sec)
mysql>
(2) Set a password for root@'%'
ALTER USER 'root'@'%' IDENTIFIED BY '<new password>';
flush privileges;
(3) Log in with the new password
mysql -h 127.0.0.1 -P 4000 -u root -p
2.8 Access Grafana
http://<server IP>:3000
2.9 Access the cluster visualization (tidb-vision)
http://<server IP>:8010
2.10 Access the Spark Web UI
http://<server IP>:8080
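If these pages cannot be opened from a browser on another machine, first check that the ports respond locally; if they do, the firewall of the minimal CentOS install is a likely cause (the commands below assume firewalld is running):
curl -I http://127.0.0.1:3000
firewall-cmd --permanent --add-port=3000/tcp --add-port=8010/tcp --add-port=8080/tcp --add-port=4000/tcp
firewall-cmd --reload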
3. TiDB backup and restore
3.1 Backup tool overview
mysqldump: can migrate data from MySQL to TiDB; convenient for small data volumes, but the migration takes a long time once the data volume grows.
Dumpling: a data export tool that exports data stored in TiDB/MySQL as SQL or CSV files; it can be used for logical full backups or exports.
TiDB Lightning: a data import tool that quickly imports data in Dumpling or CSV output format into TiDB; it can be used for logical full restores or imports.
BR: the command-line tool for distributed backup and restore of a TiDB cluster. Compared with Dumpling and Mydumper, BR is better suited to large data volumes, and it only supports TiDB v3.1 and later. For latency-insensitive incremental backups, see BR; for real-time incremental replication, see TiCDC.
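For reference, a BR full backup and restore each boil down to a single command pointed at PD and a storage location; a minimal sketch (the PD address and paths here are illustrative, and the cluster must be v3.1 or later):
br backup full --pd "127.0.0.1:2379" --storage "local:///tmp/br-backup" --log-file br-backup.log
br restore full --pd "127.0.0.1:2379" --storage "local:///tmp/br-backup" --log-file br-restore.log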
3.2 Backup and restore with mysqldump
As with MySQL, mysqldump reads the data and writes it to a .sql file.
(1) Back up a TiDB database with mysqldump
mysqldump -h 127.0.0.1 -P 4000 -uroot --databases test >aa.sql
(2) Restore aa.sql
mysql -h 127.0.0.1 -P 4000 -u root -p
mysql> source aa.sql
(3) Check the restored table
mysql> select * from test.tt;
+------+------+
| a | b |
+------+------+
| 1 | 11 |
| 2 | 22 |
+------+------+
2 rows in set (0.00 sec)
mysql>
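Alternatively, the dump can be replayed non-interactively by redirecting it into the mysql client (the file already contains the CREATE DATABASE statement because --databases was used):
mysql -h 127.0.0.1 -P 4000 -u root -p < aa.sql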
3.3 Install the tidb-toolkit tools
(1) Download tidb-toolkit from the following URL:
https://cn.pingcap.com/product-community/#TiDB
(2) Upload it to the server and extract it
tar -zxf tidb-community-toolkit-v6.4.0-linux-amd64.tar.gz
(3) View the extracted files
[root@localhost tidb-community-toolkit-v6.4.0-linux-amd64]# ls
1105.dmctl.json 2683.drainer.json 2.spark.json bench-v1.12.0-linux-amd64.tar.gz dm-worker-v6.4.0-linux-amd64.tar.gz mydumper snapshot.json
1110.dm-worker.json 2703.pump.json 3.package.json binlogctl drainer-v6.4.0-linux-amd64.tar.gz node_exporter-v1.3.1-linux-amd64.tar.gz spark-v2.4.3-any-any.tar.gz
1112.dm-master.json 2709.pd-recover.json 3.PCC.json blackbox_exporter-v0.21.1-linux-amd64.tar.gz dumpling package-v0.0.9-linux-amd64.tar.gz sync_diff_inspector
17.tispark.json 2721.dumpling.json 5.alertmanager.json br dumpling-v6.4.0-linux-amd64.tar.gz PCC-1.0.1-linux-amd64.tar.gz tidb-lightning-ctl
183.dm.json 2775.prometheus.json 7.blackbox_exporter.json br-v6.4.0-linux-amd64.tar.gz errdoc-v4.0.7-linux-amd64.tar.gz pd-recover-v6.4.0-linux-amd64.tar.gz tidb-lightning-v6.4.0-linux-amd64.tar.gz
185.server.json 2791.tidb-lightning.json 7.node_exporter.json cdc-v6.4.0-linux-amd64.tar.gz etcdctl prometheus-v6.4.0-linux-amd64.tar.gz tikv-importer-v4.0.2-linux-amd64.tar.gz
1.dba.json 2797.br.json 80.tikv-importer.json dba-v1.0.4-linux-amd64.tar.gz export-2022-12-04T23:52:52+08:00 pump-v6.4.0-linux-amd64.tar.gz timestamp.json
1.index.json 279.bench.json 9.errdoc.json dmctl-v6.4.0-linux-amd64.tar.gz grafana-v6.4.0-linux-amd64.tar.gz reparo tispark-v2.4.1-any-any.tar.gz
1.root.json 2809.grafana.json alertmanager-v0.17.0-linux-amd64.tar.gz dm-master-v6.4.0-linux-amd64.tar.gz keys root.json tiup-linux-amd64.tar.gz
225.tiup.json 2884.cdc.json arbiter dm-v1.11.0-linux-amd64.tar.gz local_install.sh server-v1.11.0-linux-amd64.tar.gz tiup-v1.11.0-linux-amd64.tar.gz
[root@localhost tidb-community-toolkit-v6.4.0-linux-amd64]#
(4) Extract dumpling-v6.4.0-linux-amd64.tar.gz
tar -zxf dumpling-v6.4.0-linux-amd64.tar.gz
(5) Extract tidb-lightning-v6.4.0-linux-amd64.tar.gz
tar -zxf tidb-lightning-v6.4.0-linux-amd64.tar.gz
3.4 Back up with Dumpling
(1) Create the backup directory
[root@localhost dumpling]# mkdir -p /tmp/dumpling
(2) Run Dumpling to perform the backup
[root@localhost dumpling]# /root/tidb-toolkit/tidb-community-toolkit-v6.4.0-linux-amd64/dumpling -u root -p oracle -P 4000 -h 127.0.0.1 --filetype sql -t 8 -o /tmp/dumpling -r 200000 -F 256MiB -B "test"
Release version: v6.4.0
Git commit hash: cf36a9ce2fe1039db3cf3444d51930b887df18a1
Git branch: heads/refs/tags/v6.4.0
Build timestamp: 2022-11-13 05:17:51Z
Go version: go version go1.19.2 linux/amd64
[2022/12/05 02:53:56.370 +08:00] [INFO] [versions.go:54] ["Welcome to dumpling"] ["Release Version"=v6.4.0] ["Git Commit Hash"=cf36a9ce2fe1039db3cf3444d51930b887df18a1] ["Git Branch"=heads/refs/tags/v6.4.0] ["Build timestamp"="2022-11-13 05:17:51"] ["Go Version"="go version go1.19.2 linux/amd64"]
[2022/12/05 02:53:56.374 +08:00] [INFO] [version.go:398] ["detect server version"] [type=TiDB] [version=5.0.1]
{"level":"warn","ts":"2022-12-05T02:54:06.388+0800","logger":"etcd-client","caller":"v3@v3.5.2/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0007601c0/pd2:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp: lookup pd1 on 192.168.40.2:53: server misbehaving\""}
[2022/12/05 02:54:06.388 +08:00] [INFO] [dump.go:1422] ["meet error while check whether fetched pd addr and TiDB belong to one cluster. This won't affect dump process"] [error="context deadline exceeded"] [pdAddrs="[pd2:2379,pd0:2379,pd1:2379]"]
[2022/12/05 02:54:06.389 +08:00] [WARN] [dump.go:1476] ["If the amount of data to dump is large, criteria: (data more than 60GB or dumped time more than 10 minutes)\nyou'd better adjust the tikv_gc_life_time to avoid export failure due to TiDB GC during the dump process.\nBefore dumping: run sql `update mysql.tidb set VARIABLE_VALUE = '720h' where VARIABLE_NAME = 'tikv_gc_life_time';` in tidb.\nAfter dumping: run sql `update mysql.tidb set VARIABLE_VALUE = '10m' where VARIABLE_NAME = 'tikv_gc_life_time';` in tidb.\n"]
[2022/12/05 02:54:06.423 +08:00] [INFO] [dump.go:131] ["begin to run Dump"] [conf="{\"s3\":{\"endpoint\":\"\",\"region\":\"\",\"storage-class\":\"\",\"sse\":\"\",\"sse-kms-key-id\":\"\",\"acl\":\"\",\"access-key\":\"\",\"secret-access-key\":\"\",\"provider\":\"\",\"force-path-style\":true,\"use-accelerate-endpoint\":false,\"role-arn\":\"\",\"external-id\":\"\",\"object-lock-enabled\":false},\"gcs\":{\"endpoint\":\"\",\"storage-class\":\"\",\"predefined-acl\":\"\",\"credentials-file\":\"\"},\"azblob\":{\"endpoint\":\"\",\"account-name\":\"\",\"account-key\":\"\",\"access-tier\":\"\"},\"AllowCleartextPasswords\":false,\"SortByPk\":true,\"NoViews\":true,\"NoSequences\":true,\"NoHeader\":false,\"NoSchemas\":false,\"NoData\":false,\"CompleteInsert\":false,\"TransactionalConsistency\":true,\"EscapeBackslash\":true,\"DumpEmptyDatabase\":true,\"PosAfterConnect\":false,\"CompressType\":0,\"Host\":\"127.0.0.1\",\"Port\":4000,\"Threads\":8,\"User\":\"root\",\"Security\":{\"CAPath\":\"\",\"CertPath\":\"\",\"KeyPath\":\"\"},\"LogLevel\":\"info\",\"LogFile\":\"\",\"LogFormat\":\"text\",\"OutputDirPath\":\"/tmp/dumpling\",\"StatusAddr\":\":8281\",\"Snapshot\":\"437827678069325825\",\"Consistency\":\"snapshot\",\"CsvNullValue\":\"\\\\N\",\"SQL\":\"\",\"CsvSeparator\":\",\",\"CsvDelimiter\":\"\\\"\",\"Databases\":[\"test\"],\"Where\":\"\",\"FileType\":\"sql\",\"ServerInfo\":{\"ServerType\":3,\"ServerVersion\":\"5.0.1\",\"HasTiKV\":true},\"Rows\":200000,\"ReadTimeout\":900000000000,\"TiDBMemQuotaQuery\":0,\"FileSize\":268435456,\"StatementSize\":1000000,\"SessionParams\":{\"tidb_snapshot\":\"437827678069325825\"},\"Tables\":{},\"CollationCompatible\":\"loose\"}"]
[2022/12/05 02:54:06.630 +08:00] [INFO] [conn.go:44] ["cannot execute query"] [retryTime=1] [sql="select distinct policy_name from information_schema.placement_policies where policy_name is not null;"] [args=null] [error="sql: select distinct policy_name from information_schema.placement_policies where policy_name is not null;, args: []: Error 1146: Table 'information_schema.placement_policies' doesn't exist"] [errorVerbose="Error 1146: Table 'information_schema.placement_policies' doesn't exist\nsql: select distinct policy_name from information_schema.placement_policies where policy_name is not null;, args: []\ngithub.com/pingcap/tidb/dumpling/export.simpleQueryWithArgs\n\tgithub.com/pingcap/tidb/dumpling/export/sql.go:1147\ngithub.com/pingcap/tidb/dumpling/export.(*BaseConn).QuerySQL.func1\n\tgithub.com/pingcap/tidb/dumpling/export/conn.go:42\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry\n\tgithub.com/pingcap/tidb/br/pkg/utils/retry.go:52\ngithub.com/pingcap/tidb/dumpling/export.(*BaseConn).QuerySQL\n\tgithub.com/pingcap/tidb/dumpling/export/conn.go:34\ngithub.com/pingcap/tidb/dumpling/export.ListAllPlacementPolicyNames\n\tgithub.com/pingcap/tidb/dumpling/export/sql.go:368\ngithub.com/pingcap/tidb/dumpling/export.(*Dumper).dumpDatabases\n\tgithub.com/pingcap/tidb/dumpling/export/dump.go:375\ngithub.com/pingcap/tidb/dumpling/export.(*Dumper).Dump\n\tgithub.com/pingcap/tidb/dumpling/export/dump.go:295\nmain.main\n\t./main.go:74\nruntime.main\n\truntime/proc.go:250\nruntime.goexit\n\truntime/asm_amd64.s:1594"]
[2022/12/05 02:54:06.640 +08:00] [INFO] [writer.go:265] ["no data written in table chunk"] [database=test] [table=tt] [chunkIdx=0]
[2022/12/05 02:54:06.640 +08:00] [INFO] [collector.go:239] ["backup success summary"] [total-ranges=4] [ranges-succeed=4] [ranges-failed=0] [total-take=10.213925ms] [total-kv-size=69B] [average-speed=6.755kB/s] [total-rows=2]
[2022/12/05 02:54:06.641 +08:00] [INFO] [main.go:81] ["dump data successfully, dumpling will exit now"]
(3) The message "dump data successfully, dumpling will exit now" indicates the backup succeeded.
(4) View the backup files
[root@localhost dumpling]# ls -lt
total 16
-rw-r--r--. 1 root root 146 Dec 5 02:54 metadata
-rw-r--r--. 1 root root 69 Dec 5 02:54 test.tt.0000000010000.sql
-rw-r--r--. 1 root root 165 Dec 5 02:54 test.tt-schema.sql
-rw-r--r--. 1 root root 95 Dec 5 02:54 test-schema-create.sql
[root@localhost dumpling]#
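As the warning in the Dumpling log above notes, for large exports (more than 60 GB of data or a dump lasting more than 10 minutes) it is advisable to lengthen tikv_gc_life_time before the dump and set it back afterwards, so that TiDB GC does not invalidate the snapshot mid-export:
-- before dumping
update mysql.tidb set VARIABLE_VALUE = '720h' where VARIABLE_NAME = 'tikv_gc_life_time';
-- after dumping
update mysql.tidb set VARIABLE_VALUE = '10m' where VARIABLE_NAME = 'tikv_gc_life_time';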
3.5 Restore with TiDB Lightning
Note: TiDB Lightning supports data files exported by Dumpling; it does not support backup files produced by BR.
(1) Configure tidb-lightning.toml
[root@localhost tidb-community-toolkit-v6.4.0-linux-amd64]# cat tidb-lightning.toml
[lightning]
# Concurrency for data conversion; defaults to the number of logical CPUs and normally does not need to be set.
# For mixed deployments it can be set to roughly 75% of the logical CPU count.
# region-concurrency =
# Logging
level = "info"
file = "tidb-lightning.log"
[tikv-importer]
# Use the local backend
backend = "local"
# Local temporary storage path for sorted key-value data
sorted-kv-dir = "/tmp/dumpling"
[mydumper]
# Source data directory (Dumpling/Mydumper output).
data-source-dir = "/tmp/dumpling"
[tidb]
# Target cluster information. Listening address of a tidb-server; one is enough.
host = "192.168.40.142"
port = 4000
user = "root"
password = "1"
# Table schema information is fetched from TiDB's status port.
status-port = 10080
# Address of a pd-server; one is enough.
pd-addr = "127.0.0.1:49153"
[root@localhost tidb-community-toolkit-v6.4.0-linux-amd64]#
(2) Run tidb-lightning
[root@localhost tidb-community-toolkit-v6.4.0-linux-amd64]# nohup ./tidb-lightning -config tidb-lightning.toml > nohup.out &
-- The restore failed; the cause is still being investigated. A likely factor is that after reaching PD through the host-mapped port 127.0.0.1:49153, the PD client switches to the member addresses pd0:2379/pd1:2379/pd2:2379, container hostnames the host cannot resolve (compare the DNS warning in the Dumpling log above), so creating the TSO stream times out.
[2022/12/05 03:11:36.146 +08:00] [ERROR] [lightning.go:519] ["restore failed"] [error="[Lightning:KV:ErrCreateKVClient]create kv client error: [PD:client:ErrClientCreateTSOStream]create TSO stream failed, retry timeout"]
[2022/12/05 03:11:36.147 +08:00] [ERROR] [main.go:103] ["tidb lightning encountered error stack info"] [error="[Lightning:KV:ErrCreateKVClient]create kv client error: [PD:client:ErrClientCreateTSOStream]create TSO stream failed, retry timeout"]
[2022/12/05 03:11:46.105 +08:00] [WARN] [config.go:827] ["currently only per-task configuration can be applied, global configuration changes can only be made on startup"] ["global config changes"="[lightning.level,lightning.file]"]
[2022/12/05 03:11:46.105 +08:00] [INFO] [lightning.go:382] [cfg] [cfg="{\"id\":1670181106105605303,\"lightning\":{\"table-concurrency\":6,\"index-concurrency\":2,\"region-concurrency\":2,\"io-concurrency\":5,\"check-requirements\":true,\"meta-schema-name\":\"lightning_metadata\",\"max-error\":{\"type\":0},\"task-info-schema-name\":\"lightning_task_info\"},\"tidb\":{\"host\":\"192.168.40.142\",\"port\":4000,\"user\":\"root\",\"status-port\":10080,\"pd-addr\":\"127.0.0.1:49153\",\"sql-mode\":\"ONLY_FULL_GROUP_BY,NO_AUTO_CREATE_USER\",\"tls\":\"false\",\"security\":{\"ca-path\":\"\",\"cert-path\":\"\",\"key-path\":\"\",\"redact-info-log\":false},\"max-allowed-packet\":67108864,\"distsql-scan-concurrency\":15,\"build-stats-concurrency\":20,\"index-serial-scan-concurrency\":20,\"checksum-table-concurrency\":2,\"vars\":null},\"checkpoint\":{\"schema\":\"tidb_lightning_checkpoint\",\"driver\":\"file\",\"enable\":true,\"keep-after-success\":\"remove\"},\"mydumper\":{\"read-block-size\":65536,\"batch-size\":0,\"batch-import-ratio\":0,\"source-id\":\"\",\"data-source-dir\":\"file:///tmp/dumpling\",\"character-set\":\"auto\",\"csv\":{\"separator\":\",\",\"delimiter\":\"\\\"\",\"terminator\":\"\",\"null\":\"\\\\N\",\"header\":true,\"trim-last-separator\":false,\"not-null\":false,\"backslash-escape\":true},\"max-region-size\":268435456,\"filter\":[\"*.*\",\"!mysql.*\",\"!sys.*\",\"!INFORMATION_SCHEMA.*\",\"!PERFORMANCE_SCHEMA.*\",\"!METRICS_SCHEMA.*\",\"!INSPECTION_SCHEMA.*\"],\"files\":null,\"no-schema\":false,\"case-sensitive\":false,\"strict-format\":false,\"default-file-rules\":true,\"ignore-data-columns\":null,\"data-character-set\":\"binary\",\"data-invalid-char-replace\":\"�\"},\"tikv-importer\":{\"addr\":\"\",\"backend\":\"local\",\"on-duplicate\":\"replace\",\"max-kv-pairs\":4096,\"send-kv-pairs\":32768,\"region-split-size\":0,\"region-split-keys\":0,\"sorted-kv-dir\":\"/tmp/dumpling\",\"disk-quota\":9223372036854775807,\"range-concurrency\":16,\"duplicate-resolution\":\"none\",\"incremental-import\":false,\"engine-mem-cache-size\":536870912,\"local-writer-mem-cache-size\":134217728,\"store-write-bwlimit\":0},\"post-restore\":{\"checksum\":\"required\",\"analyze\":\"optional\",\"level-1-compact\":false,\"post-process-at-last\":true,\"compact\":false},\"cron\":{\"switch-mode\":\"5m0s\",\"log-progress\":\"5m0s\",\"check-disk-quota\":\"1m0s\"},\"routes\":null,\"security\":{\"ca-path\":\"\",\"cert-path\":\"\",\"key-path\":\"\",\"redact-info-log\":false},\"black-white-list\":{\"do-tables\":null,\"do-dbs\":null,\"ignore-tables\":null,\"ignore-dbs\":null}}"]
[2022/12/05 03:11:46.115 +08:00] [INFO] [lightning.go:483] ["load data source start"]
[2022/12/05 03:11:46.115 +08:00] [INFO] [loader.go:450] ["[loader] file is filtered by file router"] [path=metadata]
[2022/12/05 03:11:46.115 +08:00] [INFO] [lightning.go:486] ["load data source completed"] [takeTime=291.086µs] []
[2022/12/05 03:11:46.115 +08:00] [INFO] [checkpoints.go:1014] ["open checkpoint file failed, going to create a new one"] [path=/tmp/tidb_lightning_checkpoint.pb] []
[2022/12/05 03:12:51.753 +08:00] [ERROR] [lightning.go:519] ["restore failed"] [error="[Lightning:KV:ErrCreateKVClient]create kv client error: [PD:client:ErrClientCreateTSOStream]create TSO stream failed, retry timeout"]
[2022/12/05 03:12:51.753 +08:00] [ERROR] [main.go:103] ["tidb lightning encountered error stack info"] [error="[Lightning:KV:ErrCreateKVClient]create kv client error: [PD:client:ErrClientCreateTSOStream]create TSO stream failed, retry timeout"]
[root@localhost tidb-community-toolkit-v6.4.0-linux-amd64]#
[root@localhost tidb-community-toolkit-v6.4.0-linux-amd64]#
(3) Points to note when using TiDB Lightning:
While TiDB Lightning is running, the TiDB cluster cannot provide normal service to applications.
If tidb-lightning crashes, the cluster is left in "import mode". If it is not switched back to "normal mode", the cluster accumulates a large number of uncompacted files, which consumes CPU and increases latency. In that case, use tidb-lightning-ctl to switch the cluster back to "normal mode" manually:
tidb-lightning-ctl --switch-mode=normal
(4) References:
https://blog.csdn.net/solihawk/article/details/118691591
https://www.cnblogs.com/luckyplj/p/15732313.html