1. Pre-scale-in check
mysql> select * from information_schema.cluster_info;
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+
| TYPE | INSTANCE | STATUS_ADDRESS | VERSION | GIT_HASH | START_TIME | UPTIME |
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+
| tidb | 192.168.1.13:4000 | 192.168.1.13:10080 | 4.0.7 | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T14:04:52+08:00 | 27.639879212s |
| tidb | 192.168.1.14:4000 | 192.168.1.14:10080 | 4.0.7 | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T14:04:52+08:00 | 27.639884307s |
| tidb | 192.168.1.12:4000 | 192.168.1.12:10080 | 4.0.7 | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T14:04:52+08:00 | 27.639885673s |
| tidb | 192.168.1.11:4000 | 192.168.1.11:10080 | 4.0.7 | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T14:04:52+08:00 | 27.639886905s |
| pd | 192.168.1.13:2379 | 192.168.1.13:2379 | 4.0.7 | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:59:54+08:00 | 5m25.639888192s |
| pd | 192.168.1.14:2379 | 192.168.1.14:2379 | 4.0.7 | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:59:57+08:00 | 5m22.63988964s |
| pd | 192.168.1.11:2379 | 192.168.1.11:2379 | 4.0.7 | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:59:51+08:00 | 5m28.639891932s |
| pd | 192.168.1.12:2379 | 192.168.1.12:2379 | 4.0.7 | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T14:00:03+08:00 | 5m16.639893298s |
| tikv | 192.168.1.11:20160 | 192.168.1.11:20180 | 4.0.7 | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T14:00:27+08:00 | 4m52.639894473s |
| tikv | 192.168.1.12:20160 | 192.168.1.12:20180 | 4.0.7 | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T14:00:48+08:00 | 4m31.639895659s |
| tikv | 192.168.1.13:20160 | 192.168.1.13:20180 | 4.0.7 | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T14:01:06+08:00 | 4m13.639896846s |
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+
11 rows in set (0.01 sec)
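The same topology can also be checked from the shell with tiup before scaling in (a sketch, assuming tiup is on PATH; the cluster name `tidbcluster` is taken from the scale-in command used in this note):

```shell
# Sketch: check the cluster topology with tiup before scaling in.
# `tiup cluster display` lists each instance with its role, status, and ports.
check_topology() {
  local cluster="${1:-tidbcluster}"
  tiup cluster display "$cluster"
}
# e.g. check_topology tidbcluster
```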
We will take the tidb service on 192.168.1.14 offline.
2. Execute the scale-in
[tidb@mysql1 conf]$ tiup cluster scale-in tidbcluster --node 192.168.1.14:4000
This operation will delete the 192.168.1.14:4000 nodes in `tidbcluster` and all their data.
Do you want to continue? [y/N]:(default=N) y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [ Serial ] - ClusterOperate: operation=DestroyOperation, options={Roles:[] Nodes:[192.168.1.14:4000] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:600 IgnoreConfigCheck:false NativeSSH:false SSHType: Concurrency:5 SSHProxyHost: SSHProxyPort:22 SSHProxyUser:tidb SSHProxyIdentity:/home/tidb/.ssh/id_rsa SSHProxyUsePassword:false SSHProxyTimeout:5 SSHCustomScripts:{BeforeRestartInstance:{Raw:} AfterRestartInstance:{Raw:}} CleanupData:false CleanupLog:false CleanupAuditLog:false RetainDataRoles:[] RetainDataNodes:[] DisplayMode:default Operation:StartOperation}
Stopping component tidb
Stopping instance 192.168.1.14
Stop tidb 192.168.1.14:4000 success
Destroying component tidb
Destroying instance 192.168.1.14
Destroy 192.168.1.14 finished
- Destroy tidb paths: [/tidb/tidb-deploy/tidb-4000/log /tidb/tidb-deploy/tidb-4000 /etc/systemd/system/tidb-4000.service]
+ [ Serial ] - UpdateMeta: cluster=tidbcluster, deleted=`'192.168.1.14:4000'`
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ Refresh instance configs
- Generate config pd -> 192.168.1.11:2379 ... Done
- Generate config pd -> 192.168.1.12:2379 ... Done
- Generate config pd -> 192.168.1.13:2379 ... Done
- Generate config pd -> 192.168.1.14:2379 ... Done
- Generate config tikv -> 192.168.1.11:20160 ... Done
- Generate config tikv -> 192.168.1.12:20160 ... Done
- Generate config tikv -> 192.168.1.13:20160 ... Done
- Generate config tidb -> 192.168.1.11:4000 ... Done
- Generate config tidb -> 192.168.1.12:4000 ... Done
- Generate config tidb -> 192.168.1.13:4000 ... Done
- Generate config prometheus -> 192.168.1.11:9090 ... Done
- Generate config grafana -> 192.168.1.11:3000 ... Done
- Generate config alertmanager -> 192.168.1.11:9093 ... Done
+ Reload prometheus and grafana
- Reload prometheus -> 192.168.1.11:9090 ... Done
- Reload grafana -> 192.168.1.11:3000 ... Done
Scaled cluster `tidbcluster` in successfully
3. Verify the result
mysql> select * from information_schema.cluster_info;
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+------------------+
| TYPE | INSTANCE | STATUS_ADDRESS | VERSION | GIT_HASH | START_TIME | UPTIME |
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+------------------+
| tidb | 192.168.1.13:4000 | 192.168.1.13:10080 | 4.0.7 | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T14:04:52+08:00 | 6m10.744098286s |
| tidb | 192.168.1.12:4000 | 192.168.1.12:10080 | 4.0.7 | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T14:04:52+08:00 | 6m10.744105254s |
| tidb | 192.168.1.11:4000 | 192.168.1.11:10080 | 4.0.7 | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T14:04:52+08:00 | 6m10.744107476s |
| pd | 192.168.1.13:2379 | 192.168.1.13:2379 | 4.0.7 | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:59:54+08:00 | 11m8.744109566s |
| pd | 192.168.1.14:2379 | 192.168.1.14:2379 | 4.0.7 | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:59:57+08:00 | 11m5.744111928s |
| pd | 192.168.1.11:2379 | 192.168.1.11:2379 | 4.0.7 | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:59:51+08:00 | 11m11.744114463s |
| pd | 192.168.1.12:2379 | 192.168.1.12:2379 | 4.0.7 | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T14:00:03+08:00 | 10m59.744116875s |
| tikv | 192.168.1.11:20160 | 192.168.1.11:20180 | 4.0.7 | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T14:00:27+08:00 | 10m35.744118862s |
| tikv | 192.168.1.12:20160 | 192.168.1.12:20180 | 4.0.7 | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T14:00:48+08:00 | 10m14.74412098s |
| tikv | 192.168.1.13:20160 | 192.168.1.13:20180 | 4.0.7 | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T14:01:06+08:00 | 9m56.744123011s |
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+------------------+
10 rows in set (0.02 sec)
4. Clean up nodes in Tombstone state
tiup cluster prune tidbcluster
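Before pruning, you can see which TiKV stores are actually in Tombstone state by querying the PD HTTP API (a sketch; assumes `curl` is available and uses one of this cluster's PD addresses; `state=2` is assumed here to be PD's numeric code for Tombstone):

```shell
# Sketch: list stores in Tombstone state via the PD HTTP API.
# state=2 is taken to be the Tombstone value in PD's store-state enum
# (assumption noted in the lead-in above).
list_tombstone_stores() {
  local pd="${1:-192.168.1.11:2379}"
  curl -s "http://${pd}/pd/api/v1/stores?state=2"
}
# e.g. list_tombstone_stores 192.168.1.11:2379
```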
5. Restart the cluster and confirm it comes up
[tidb@mysql1 conf]$ tiup cluster restart tidbcluster
Checking updates for component cluster... Timedout (after 2s)
Will restart the cluster tidbcluster with nodes: roles: .
Cluster will be unavailable
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [ Serial ] - RestartCluster
Stopping component alertmanager
Stopping instance 192.168.1.11
Stop alertmanager 192.168.1.11:9093 success
Stopping component grafana
Stopping instance 192.168.1.11
Stop grafana 192.168.1.11:3000 success
Stopping component prometheus
Stopping instance 192.168.1.11
Stop prometheus 192.168.1.11:9090 success
Stopping component tidb
Stopping instance 192.168.1.13
Stopping instance 192.168.1.11
Stopping instance 192.168.1.12
Stop tidb 192.168.1.12:4000 success
Stop tidb 192.168.1.13:4000 success
Stop tidb 192.168.1.11:4000 success
Stopping component tikv
Stopping instance 192.168.1.13
Stopping instance 192.168.1.11
Stopping instance 192.168.1.12
Stop tikv 192.168.1.13:20160 success
Stop tikv 192.168.1.12:20160 success
Stop tikv 192.168.1.11:20160 success
Stopping component pd
Stopping instance 192.168.1.14
Stopping instance 192.168.1.11
Stopping instance 192.168.1.12
Stopping instance 192.168.1.13
Stop pd 192.168.1.13:2379 success
Stop pd 192.168.1.12:2379 success
Stop pd 192.168.1.14:2379 success
Stop pd 192.168.1.11:2379 success
Stopping component node_exporter
Stopping instance 192.168.1.14
Stopping instance 192.168.1.11
Stopping instance 192.168.1.12
Stopping instance 192.168.1.13
Stop 192.168.1.14 success
Stop 192.168.1.12 success
Stop 192.168.1.11 success
Stop 192.168.1.13 success
Stopping component blackbox_exporter
Stopping instance 192.168.1.14
Stopping instance 192.168.1.11
Stopping instance 192.168.1.12
Stopping instance 192.168.1.13
Stop 192.168.1.14 success
Stop 192.168.1.12 success
Stop 192.168.1.13 success
Stop 192.168.1.11 success
Starting component pd
Starting instance 192.168.1.14:2379
Starting instance 192.168.1.11:2379
Starting instance 192.168.1.12:2379
Starting instance 192.168.1.13:2379
Start instance 192.168.1.14:2379 success
Start instance 192.168.1.12:2379 success
Start instance 192.168.1.13:2379 success
Start instance 192.168.1.11:2379 success
Starting component tikv
Starting instance 192.168.1.13:20160
Starting instance 192.168.1.11:20160
Starting instance 192.168.1.12:20160
Start instance 192.168.1.13:20160 success
Start instance 192.168.1.12:20160 success
Start instance 192.168.1.11:20160 success
Starting component tidb
Starting instance 192.168.1.13:4000
Starting instance 192.168.1.11:4000
Starting instance 192.168.1.12:4000
Start instance 192.168.1.12:4000 success
Start instance 192.168.1.11:4000 success
Start instance 192.168.1.13:4000 success
Starting component prometheus
Starting instance 192.168.1.11:9090
Start instance 192.168.1.11:9090 success
Starting component grafana
Starting instance 192.168.1.11:3000
Start instance 192.168.1.11:3000 success
Starting component alertmanager
Starting instance 192.168.1.11:9093
Start instance 192.168.1.11:9093 success
Starting component node_exporter
Starting instance 192.168.1.14
Starting instance 192.168.1.11
Starting instance 192.168.1.12
Starting instance 192.168.1.13
Start 192.168.1.13 success
Start 192.168.1.14 success
Start 192.168.1.12 success
Start 192.168.1.11 success
Starting component blackbox_exporter
Starting instance 192.168.1.14
Starting instance 192.168.1.11
Starting instance 192.168.1.12
Starting instance 192.168.1.13
Start 192.168.1.13 success
Start 192.168.1.12 success
Start 192.168.1.14 success
Start 192.168.1.11 success
Restarted cluster `tidbcluster` successfully
6. Summary
When scaling in tidb-server nodes, the prune command is not required; when scaling in tikv nodes, prune must be run afterwards to clean up stores left in Tombstone state.
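The whole procedure above can be condensed into one shell sketch (assumes tiup is installed; `--yes` is assumed to skip the interactive confirmation; the cluster and node names are this note's examples, not fixed values):

```shell
# Sketch of the scale-in workflow from this note:
# scale in a node, prune tombstone stores (a no-op after tidb-server scale-in,
# required after tikv scale-in), then display the topology to confirm.
scale_in_node() {
  local cluster="$1" node="$2"
  tiup cluster scale-in "$cluster" --node "$node" --yes &&
  tiup cluster prune "$cluster" &&
  tiup cluster display "$cluster"
}
# e.g. scale_in_node tidbcluster 192.168.1.14:4000
```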