[TiDB] TiKV scale-in Error: failed to start tikv: failed to start: 192.168.1.14 tikv-20160.service

1. Check the cluster

mysql> select * from information_schema.cluster_info;
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+------------------+
| TYPE | INSTANCE           | STATUS_ADDRESS     | VERSION | GIT_HASH                                 | START_TIME                | UPTIME           |
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+------------------+
| tidb | 192.168.1.13:4000  | 192.168.1.13:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T13:03:56+08:00 | 20m24.438042855s |
| tidb | 192.168.1.14:4000  | 192.168.1.14:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T13:23:06+08:00 | 1m14.438048582s  |
| tidb | 192.168.1.11:4000  | 192.168.1.11:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T13:03:56+08:00 | 20m24.438050469s |
| tidb | 192.168.1.12:4000  | 192.168.1.12:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T13:03:56+08:00 | 20m24.438051689s |
| pd   | 192.168.1.13:2379  | 192.168.1.13:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:03:48+08:00 | 20m32.438052829s |
| pd   | 192.168.1.14:2379  | 192.168.1.14:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:23:00+08:00 | 1m20.438054002s  |
| pd   | 192.168.1.11:2379  | 192.168.1.11:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:03:48+08:00 | 20m32.438055173s |
| pd   | 192.168.1.12:2379  | 192.168.1.12:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:03:48+08:00 | 20m32.43805927s  |
| tikv | 192.168.1.13:20160 | 192.168.1.13:20180 | 4.0.7   | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T13:03:50+08:00 | 20m30.438060594s |
| tikv | 192.168.1.14:20160 | 192.168.1.14:20180 | 4.0.7   | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T13:23:04+08:00 | 1m16.438061762s  |
| tikv | 192.168.1.11:20160 | 192.168.1.11:20180 | 4.0.7   | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T13:03:50+08:00 | 20m30.438062906s |
| tikv | 192.168.1.12:20160 | 192.168.1.12:20180 | 4.0.7   | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T13:03:50+08:00 | 20m30.438064144s |
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+------------------+
12 rows in set (0.01 sec)

2. Perform the scale-in

--The cluster currently has four TiKV nodes; we scale in to three by removing node 192.168.1.14.
tiup cluster scale-in tidbcluster --node 192.168.1.14:20160
[tidb@mysql1 ~]$ tiup cluster scale-in tidbcluster --node 192.168.1.14:20160
This operation will delete the 192.168.1.14:20160 nodes in `tidbcluster` and all their data.
Do you want to continue? [y/N]:(default=N) y
The component `[tikv]` will become tombstone, maybe exists in several minutes or hours, after that you can use the prune command to clean it
Do you want to continue? [y/N]:(default=N) y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [ Serial ] - ClusterOperate: operation=DestroyOperation, options={Roles:[] Nodes:[192.168.1.14:20160] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:600 IgnoreConfigCheck:false NativeSSH:false SSHType: Concurrency:5 SSHProxyHost: SSHProxyPort:22 SSHProxyUser:tidb SSHProxyIdentity:/home/tidb/.ssh/id_rsa SSHProxyUsePassword:false SSHProxyTimeout:5 SSHCustomScripts:{BeforeRestartInstance:{Raw:} AfterRestartInstance:{Raw:}} CleanupData:false CleanupLog:false CleanupAuditLog:false RetainDataRoles:[] RetainDataNodes:[] DisplayMode:default Operation:StartOperation}
The component `tikv` will become tombstone, maybe exists in several minutes or hours, after that you can use the prune command to clean it
+ [ Serial ] - UpdateMeta: cluster=tidbcluster, deleted=`''`
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ Refresh instance configs
  - Generate config pd -> 192.168.1.11:2379 ... Done
  - Generate config pd -> 192.168.1.12:2379 ... Done
  - Generate config pd -> 192.168.1.13:2379 ... Done
  - Generate config pd -> 192.168.1.14:2379 ... Done
  - Generate config tikv -> 192.168.1.11:20160 ... Done
  - Generate config tikv -> 192.168.1.12:20160 ... Done
  - Generate config tikv -> 192.168.1.13:20160 ... Done
  - Generate config tidb -> 192.168.1.11:4000 ... Done
  - Generate config tidb -> 192.168.1.12:4000 ... Done
  - Generate config tidb -> 192.168.1.13:4000 ... Done
  - Generate config tidb -> 192.168.1.14:4000 ... Done
  - Generate config prometheus -> 192.168.1.11:9090 ... Done
  - Generate config grafana -> 192.168.1.11:3000 ... Done
  - Generate config alertmanager -> 192.168.1.11:9093 ... Done
+ Reload prometheus and grafana
  - Reload prometheus -> 192.168.1.11:9090 ... Done
  - Reload grafana -> 192.168.1.11:3000 ... Done
Scaled cluster `tidbcluster` in successfully

3. Check after the scale-in

mysql> select * from information_schema.cluster_info;
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+------------------+
| TYPE | INSTANCE           | STATUS_ADDRESS     | VERSION | GIT_HASH                                 | START_TIME                | UPTIME           |
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+------------------+
| tidb | 192.168.1.13:4000  | 192.168.1.13:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T13:03:56+08:00 | 26m15.033098424s |
| tidb | 192.168.1.14:4000  | 192.168.1.14:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T13:23:06+08:00 | 7m5.033104483s   |
| tidb | 192.168.1.11:4000  | 192.168.1.11:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T13:03:56+08:00 | 26m15.033106061s |
| tidb | 192.168.1.12:4000  | 192.168.1.12:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T13:03:56+08:00 | 26m15.033107332s |
| pd   | 192.168.1.13:2379  | 192.168.1.13:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:03:48+08:00 | 26m23.033108699s |
| pd   | 192.168.1.14:2379  | 192.168.1.14:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:23:00+08:00 | 7m11.033113191s  |
| pd   | 192.168.1.11:2379  | 192.168.1.11:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:03:48+08:00 | 26m23.033114526s |
| pd   | 192.168.1.12:2379  | 192.168.1.12:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:03:48+08:00 | 26m23.033115684s |
| tikv | 192.168.1.11:20160 | 192.168.1.11:20180 | 4.0.7   | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T13:03:50+08:00 | 26m21.033116837s |
| tikv | 192.168.1.12:20160 | 192.168.1.12:20180 | 4.0.7   | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T13:03:50+08:00 | 26m21.033117991s |
| tikv | 192.168.1.13:20160 | 192.168.1.13:20180 | 4.0.7   | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T13:03:50+08:00 | 26m21.033119126s |
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+------------------+
11 rows in set (0.02 sec)
--At first glance the scale-in appears to have succeeded: the TiKV instance on 192.168.1.14 no longer shows up.
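Note, however, that `scale-in` does not delete a TiKV store immediately: PD first marks it Offline while its Regions are migrated away, and only then marks it Tombstone. The store state can be watched with the commands below (a sketch; the exact `tiup ctl` form depends on the tiup version installed):

```shell
# Watch the scaled-in store's state (Offline -> Tombstone).
tiup cluster display tidbcluster
# Or query PD directly; each store's state_name shows Up / Offline / Tombstone.
tiup ctl:v4.0.7 pd -u http://192.168.1.11:2379 store
```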

4. Check the services

--The tikv-server process is still running on node 192.168.1.14.
[root@mysql4 ~]# ps -ef |grep server
tidb       8060      1  0 13:22 ?        00:00:04 bin/pd-server --name=pd-192.168.1.14-2379 --client-urls=http://0.0.0.0:2379 --advertise-client-urls=http://192.168.1.14:2379 --peer-urls=http://0.0.0.0:2380 --advertise-peer-urls=http://192.168.1.14:2380 --data-dir=/tidb/tidb-data/pd-2379 --join=http://192.168.1.11:2379,http://192.168.1.12:2379,http://192.168.1.13:2379 --config=conf/pd.toml --log-file=/tidb/tidb-deploy/pd-2379/log/pd.log
tidb       8166      1  1 13:23 ?        00:00:08 bin/tikv-server --addr 0.0.0.0:20160 --advertise-addr 192.168.1.14:20160 --status-addr 0.0.0.0:20180 --advertise-status-addr 192.168.1.14:20180 --pd 192.168.1.11:2379,192.168.1.12:2379,192.168.1.13:2379 --data-dir /tidb/tidb-data/tikv-20160 --config conf/tikv.toml --log-file /tidb/tidb-deploy/tikv-20160/log/tikv.log
tidb       8445      1  0 13:23 ?        00:00:03 bin/tidb-server -P 4000 --status=10080 --host=0.0.0.0 --advertise-address=192.168.1.14 --store=tikv --path=192.168.1.11:2379,192.168.1.12:2379,192.168.1.13:2379 --log-slow-query=/tidb/tidb-deploy/tidb-4000/log/tidb_slow_query.log --config=conf/tidb.toml --log-file=/tidb/tidb-deploy/tidb-4000/log/tidb.log
root      10019   2249  0 13:30 pts/0    00:00:00 grep --color=auto server
--Restart the cluster and observe whether the TiKV instance on node 192.168.1.14 shuts down automatically.

[tidb@mysql1 ~]$ tiup cluster stop tidbcluster
Will stop the cluster tidbcluster with nodes: , roles: .
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [ Serial ] - StopCluster
Stopping component alertmanager
	Stopping instance 192.168.1.11
	Stop alertmanager 192.168.1.11:9093 success
Stopping component grafana
	Stopping instance 192.168.1.11
	Stop grafana 192.168.1.11:3000 success
Stopping component prometheus
	Stopping instance 192.168.1.11
	Stop prometheus 192.168.1.11:9090 success
Stopping component tidb
	Stopping instance 192.168.1.14
	Stopping instance 192.168.1.11
	Stopping instance 192.168.1.12
	Stopping instance 192.168.1.13
	Stop tidb 192.168.1.14:4000 success
	Stop tidb 192.168.1.13:4000 success
	Stop tidb 192.168.1.12:4000 success
	Stop tidb 192.168.1.11:4000 success
Stopping component tikv
	Stopping instance 192.168.1.14
	Stopping instance 192.168.1.11
	Stopping instance 192.168.1.12
	Stopping instance 192.168.1.13
	Stop tikv 192.168.1.14:20160 success
	Stop tikv 192.168.1.11:20160 success
	Stop tikv 192.168.1.12:20160 success
	Stop tikv 192.168.1.13:20160 success
Stopping component pd
	Stopping instance 192.168.1.14
	Stopping instance 192.168.1.11
	Stopping instance 192.168.1.12
	Stopping instance 192.168.1.13
	Stop pd 192.168.1.11:2379 success
	Stop pd 192.168.1.14:2379 success
	Stop pd 192.168.1.12:2379 success
	Stop pd 192.168.1.13:2379 success
Stopping component node_exporter
	Stopping instance 192.168.1.11
	Stopping instance 192.168.1.12
	Stopping instance 192.168.1.13
	Stopping instance 192.168.1.14
	Stop 192.168.1.11 success
	Stop 192.168.1.14 success
	Stop 192.168.1.12 success
	Stop 192.168.1.13 success
Stopping component blackbox_exporter
	Stopping instance 192.168.1.11
	Stopping instance 192.168.1.12
	Stopping instance 192.168.1.13
	Stopping instance 192.168.1.14
	Stop 192.168.1.14 success
	Stop 192.168.1.11 success
	Stop 192.168.1.12 success
	Stop 192.168.1.13 success
Stopped cluster `tidbcluster` successfully
[tidb@mysql1 ~]$ 
[tidb@mysql1 ~]$ tiup cluster start tidbcluster
Starting cluster tidbcluster...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [ Serial ] - StartCluster
Starting component pd
	Starting instance 192.168.1.14:2379
	Starting instance 192.168.1.11:2379
	Starting instance 192.168.1.12:2379
	Starting instance 192.168.1.13:2379
	Start instance 192.168.1.14:2379 success
	Start instance 192.168.1.11:2379 success
	Start instance 192.168.1.12:2379 success
	Start instance 192.168.1.13:2379 success
Starting component tikv
	Starting instance 192.168.1.14:20160
	Starting instance 192.168.1.11:20160
	Starting instance 192.168.1.12:20160
	Starting instance 192.168.1.13:20160
	Start instance 192.168.1.12:20160 success
	Start instance 192.168.1.13:20160 success
	Start instance 192.168.1.11:20160 success


Error: failed to start tikv: failed to start: 192.168.1.14 tikv-20160.service, 
please check the instance''s log(/tidb/tidb-deploy/tikv-20160/log) for more detail.: 
timed out waiting for port 20160 to be started after 2m0s

Verbose debug logs has been written to /home/tidb/.tiup/logs/tiup-cluster-debug-2024-08-16-13-35-12.log.

--After the restart, the cluster fails to start:
tidb-server was not started on any of the four nodes;
the TiKV instance on node 192.168.1.14 failed to start, which caused the whole cluster start to fail, even though that node had already been scaled in.
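Before cleaning up, it can help to confirm on the failing node why tikv-server exits; the log path below comes from the error message above. These are diagnostic suggestions, not steps from the original run:

```shell
# On 192.168.1.14: inspect the TiKV log referenced by the error message.
tail -n 50 /tidb/tidb-deploy/tikv-20160/log/tikv.log
# Check the systemd unit that tiup failed to start.
systemctl status tikv-20160.service --no-pager
```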

5. Clean up the Tombstone node.

tiup cluster prune tidbcluster
[tidb@mysql1 conf]$ tiup cluster prune tidbcluster
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [ Serial ] - FindTomestoneNodes
Will destroy these nodes: [192.168.1.14:20160]
Do you confirm this action? [y/N]:(default=N) y
Start destroy Tombstone nodes: [192.168.1.14:20160] ...
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:600 IgnoreConfigCheck:false NativeSSH:false SSHType: Concurrency:5 SSHProxyHost: SSHProxyPort:22 SSHProxyUser:tidb SSHProxyIdentity:/home/tidb/.ssh/id_rsa SSHProxyUsePassword:false SSHProxyTimeout:5 SSHCustomScripts:{BeforeRestartInstance:{Raw:} AfterRestartInstance:{Raw:}} CleanupData:false CleanupLog:false CleanupAuditLog:false RetainDataRoles:[] RetainDataNodes:[] DisplayMode:default Operation:StartOperation}
Stopping component tikv
	Stopping instance 192.168.1.14
	Stop tikv 192.168.1.14:20160 success
Destroying component tikv
	Destroying instance 192.168.1.14
Destroy 192.168.1.14 finished
- Destroy tikv paths: [/etc/systemd/system/tikv-20160.service /tidb/tidb-data/tikv-20160 /tidb/tidb-deploy/tikv-20160/log /tidb/tidb-deploy/tikv-20160]
+ [ Serial ] - UpdateMeta: cluster=tidbcluster, deleted=`'192.168.1.14:20160'`
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [ Serial ] - FindTomestoneNodes
Will destroy these nodes: [192.168.1.14:20160]
Do you confirm this action? [y/N]:(default=N) y
Start destroy Tombstone nodes: [192.168.1.14:20160] ...
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:600 IgnoreConfigCheck:false NativeSSH:false SSHType: Concurrency:5 SSHProxyHost: SSHProxyPort:22 SSHProxyUser:tidb SSHProxyIdentity:/home/tidb/.ssh/id_rsa SSHProxyUsePassword:false SSHProxyTimeout:5 SSHCustomScripts:{BeforeRestartInstance:{Raw:} AfterRestartInstance:{Raw:}} CleanupData:false CleanupLog:false CleanupAuditLog:false RetainDataRoles:[] RetainDataNodes:[] DisplayMode:default Operation:StartOperation}
+ [ Serial ] - UpdateMeta: cluster=tidbcluster, deleted=`'192.168.1.14:20160'`
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
+ Refresh instance configs
  - Generate config pd -> 192.168.1.11:2379 ... Done
  - Generate config pd -> 192.168.1.12:2379 ... Done
  - Generate config pd -> 192.168.1.13:2379 ... Done
  - Generate config pd -> 192.168.1.14:2379 ... Done
  - Generate config tikv -> 192.168.1.11:20160 ... Done
  - Generate config tikv -> 192.168.1.12:20160 ... Done
  - Generate config tikv -> 192.168.1.13:20160 ... Done
  - Generate config tidb -> 192.168.1.11:4000 ... Done
  - Generate config tidb -> 192.168.1.12:4000 ... Done
  - Generate config tidb -> 192.168.1.13:4000 ... Done
  - Generate config tidb -> 192.168.1.14:4000 ... Done
  - Generate config prometheus -> 192.168.1.11:9090 ... Done
  - Generate config grafana -> 192.168.1.11:3000 ... Done
  - Generate config alertmanager -> 192.168.1.11:9093 ... Done
+ Reload prometheus and grafana
  - Reload prometheus -> 192.168.1.11:9090 ... Done
  - Reload grafana -> 192.168.1.11:3000 ... Done
+ [ Serial ] - RemoveTomestoneNodesInPD
Destroy success

--Start the cluster again.
[tidb@mysql1 conf]$ tiup cluster start tidbcluster
Starting cluster tidbcluster...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [ Serial ] - StartCluster
Starting component pd
	Starting instance 192.168.1.14:2379
	Starting instance 192.168.1.11:2379
	Starting instance 192.168.1.12:2379
	Starting instance 192.168.1.13:2379
	Start instance 192.168.1.14:2379 success
	Start instance 192.168.1.12:2379 success
	Start instance 192.168.1.11:2379 success
	Start instance 192.168.1.13:2379 success
Starting component tikv
	Starting instance 192.168.1.13:20160
	Starting instance 192.168.1.11:20160
	Starting instance 192.168.1.12:20160
	Start instance 192.168.1.12:20160 success
	Start instance 192.168.1.11:20160 success
	Start instance 192.168.1.13:20160 success
Starting component tidb
	Starting instance 192.168.1.14:4000
	Starting instance 192.168.1.11:4000
	Starting instance 192.168.1.12:4000
	Starting instance 192.168.1.13:4000
	Start instance 192.168.1.14:4000 success
	Start instance 192.168.1.12:4000 success
	Start instance 192.168.1.11:4000 success
	Start instance 192.168.1.13:4000 success
Starting component prometheus
	Starting instance 192.168.1.11:9090
	Start instance 192.168.1.11:9090 success
Starting component grafana
	Starting instance 192.168.1.11:3000
	Start instance 192.168.1.11:3000 success
Starting component alertmanager
	Starting instance 192.168.1.11:9093
	Start instance 192.168.1.11:9093 success
Starting component node_exporter
	Starting instance 192.168.1.11
	Starting instance 192.168.1.12
	Starting instance 192.168.1.13
	Starting instance 192.168.1.14
	Start 192.168.1.14 success
	Start 192.168.1.12 success
	Start 192.168.1.13 success
	Start 192.168.1.11 success
Starting component blackbox_exporter
	Starting instance 192.168.1.11
	Starting instance 192.168.1.12
	Starting instance 192.168.1.13
	Starting instance 192.168.1.14
	Start 192.168.1.13 success
	Start 192.168.1.14 success
	Start 192.168.1.12 success
	Start 192.168.1.11 success
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
Started cluster `tidbcluster` successfully

mysql> select * from information_schema.cluster_info;
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+
| TYPE | INSTANCE           | STATUS_ADDRESS     | VERSION | GIT_HASH                                 | START_TIME                | UPTIME          |
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+
| tidb | 192.168.1.13:4000  | 192.168.1.13:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T14:04:52+08:00 | 27.639879212s   |
| tidb | 192.168.1.14:4000  | 192.168.1.14:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T14:04:52+08:00 | 27.639884307s   |
| tidb | 192.168.1.12:4000  | 192.168.1.12:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T14:04:52+08:00 | 27.639885673s   |
| tidb | 192.168.1.11:4000  | 192.168.1.11:10080 | 4.0.7   | ed939f3f11599b5a38352c5c160c917df3ebf3eb | 2024-08-16T14:04:52+08:00 | 27.639886905s   |
| pd   | 192.168.1.13:2379  | 192.168.1.13:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:59:54+08:00 | 5m25.639888192s |
| pd   | 192.168.1.14:2379  | 192.168.1.14:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:59:57+08:00 | 5m22.63988964s  |
| pd   | 192.168.1.11:2379  | 192.168.1.11:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T13:59:51+08:00 | 5m28.639891932s |
| pd   | 192.168.1.12:2379  | 192.168.1.12:2379  | 4.0.7   | 8b0348f545611d5955e32fdcf3c57a3f73657d77 | 2024-08-16T14:00:03+08:00 | 5m16.639893298s |
| tikv | 192.168.1.11:20160 | 192.168.1.11:20180 | 4.0.7   | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T14:00:27+08:00 | 4m52.639894473s |
| tikv | 192.168.1.12:20160 | 192.168.1.12:20180 | 4.0.7   | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T14:00:48+08:00 | 4m31.639895659s |
| tikv | 192.168.1.13:20160 | 192.168.1.13:20180 | 4.0.7   | bc0a9b3974f32cc2e08244a6eaf5284e5e5f4d76 | 2024-08-16T14:01:06+08:00 | 4m13.639896846s |
+------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+
11 rows in set (0.01 sec)

6. Summary

After scaling in a TiDB cluster, you must run the `prune` command to clean up nodes left in Tombstone state; otherwise the stale tikv systemd unit remains registered and the next cluster start will fail.
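The whole procedure, in the order that avoids the failed start above, can be sketched as:

```shell
# Retire a TiKV node: scale in, wait for Tombstone, prune, then (re)start.
tiup cluster scale-in tidbcluster --node 192.168.1.14:20160
tiup cluster display tidbcluster    # repeat until the node shows Tombstone
tiup cluster prune tidbcluster      # destroy the Tombstone instance and update metadata
tiup cluster start tidbcluster      # the cluster now starts without the stale tikv unit
```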
