1. Check the current cluster
mysql> select * from information_schema.cluster_info;
+---------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+-----------+
| TYPE | INSTANCE | STATUS_ADDRESS | VERSION | GIT_HASH | START_TIME | UPTIME | SERVER_ID |
+---------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+-----------+
| tidb | 192.168.1.11:4000 | 192.168.1.11:10080 | 5.1.1 | 797bddd25310ed42f0791c8eccb78be8cce2f502 | 2024-08-16T17:45:16+08:00 | 4m11.0186123s | 0 |
| tidb | 192.168.1.13:4000 | 192.168.1.13:10080 | 5.1.1 | 797bddd25310ed42f0791c8eccb78be8cce2f502 | 2024-08-16T17:45:16+08:00 | 4m11.018618394s | 0 |
| tidb | 192.168.1.12:4001 | 192.168.1.12:10081 | 5.1.1 | 797bddd25310ed42f0791c8eccb78be8cce2f502 | 2024-08-16T17:45:16+08:00 | 4m11.018619939s | 0 |
| pd | 192.168.1.13:2379 | 192.168.1.13:2379 | 5.1.1 | 7cba1912b317a533e18b16ea2ba9a14ed2891129 | 2024-08-16T17:45:02+08:00 | 4m25.018621305s | 0 |
| pd | 192.168.1.11:2379 | 192.168.1.11:2379 | 5.1.1 | 7cba1912b317a533e18b16ea2ba9a14ed2891129 | 2024-08-16T17:45:02+08:00 | 4m25.018624355s | 0 |
| pd | 192.168.1.12:2379 | 192.168.1.12:2379 | 5.1.1 | 7cba1912b317a533e18b16ea2ba9a14ed2891129 | 2024-08-16T17:45:02+08:00 | 4m25.018625724s | 0 |
| tikv | 192.168.1.12:20161 | 192.168.1.12:20181 | 5.1.1 | 4705d7c6e9c42d129d3309e05911ec6b08a25a38 | 2024-08-16T17:45:06+08:00 | 4m21.018627049s | 0 |
| tikv | 192.168.1.13:20160 | 192.168.1.13:20180 | 5.1.1 | 4705d7c6e9c42d129d3309e05911ec6b08a25a38 | 2024-08-16T17:45:06+08:00 | 4m21.018628342s | 0 |
| tikv | 192.168.1.11:20160 | 192.168.1.11:20180 | 5.1.1 | 4705d7c6e9c42d129d3309e05911ec6b08a25a38 | 2024-08-16T17:45:06+08:00 | 4m21.018629662s | 0 |
| tiflash | 192.168.1.11:3930 | 192.168.1.11:20292 | v5.1.1 | c8fabfb50fe28db17cc5118133a69be255c40efd | 2024-08-16T17:45:27+08:00 | 4m0.01863095s | 0 |
| tiflash | 192.168.1.14:3930 | 192.168.1.14:20292 | v5.1.1 | c8fabfb50fe28db17cc5118133a69be255c40efd | 2024-08-16T17:45:27+08:00 | 4m0.018632361s | 0 |
+---------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+-----------+
11 rows in set (0.01 sec)
-- There are currently two TiFlash nodes; we will remove one and keep the other.
2. Set the TiFlash replica count for tables
-- Check which tables currently have TiFlash replicas
mysql> SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.tiflash_replica;
+--------------+------------+
| TABLE_SCHEMA | TABLE_NAME |
+--------------+------------+
| test | my_test2 |
+--------------+------------+
1 row in set (0.00 sec)
mysql> alter table test.my_test2 set tiflash replica 0;
Query OK, 0 rows affected (0.11 sec)
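Setting the replica count to 0 first matters: before scaling in TiFlash, no table's replica count may exceed the number of TiFlash nodes that will remain afterwards. A minimal pre-check sketch (the counts below are this walkthrough's numbers, filled in by hand):

```shell
#!/bin/sh
# Pre-check before scaling in TiFlash: replicas must fit on the remaining nodes.
# The values below are this cluster's numbers, entered manually for illustration.
NODES_AFTER=1        # TiFlash nodes left after removing 192.168.1.14
DESIRED_REPLICAS=0   # what we just set with ALTER TABLE ... SET TIFLASH REPLICA 0
if [ "$DESIRED_REPLICAS" -gt "$NODES_AFTER" ]; then
    echo "unsafe: lower the TiFlash replica count before scale-in"
else
    echo "safe to scale in"
fi
```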
3. Scale in one TiFlash node
-- tiup identifies a TiFlash node by its tcp_port (9000), whereas cluster_info above shows its service port (3930); the node to remove is on 192.168.1.14.
[tidb@mysql1 ~]$ tiup cluster scale-in tidbcluster --node 192.168.1.14:9000
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.5.3/tiup-cluster scale-in tidbcluster --node 192.168.1.14:9000
This operation will delete the 192.168.1.14:9000 nodes in `tidbcluster` and all their data.
Do you want to continue? [y/N]:(default=N) y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[192.168.1.14:9000] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[] ShowUptime:false JSON:false Operation:StartOperation}
The component `tiflash` will become tombstone, maybe exists in several minutes or hours, after that you can use the prune command to clean it
+ [ Serial ] - UpdateMeta: cluster=tidbcluster, deleted=`''`
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
+ Refresh instance configs
- Regenerate config pd -> 192.168.1.11:2379 ... Done
- Regenerate config pd -> 192.168.1.12:2379 ... Done
- Regenerate config pd -> 192.168.1.13:2379 ... Done
- Regenerate config tikv -> 192.168.1.11:20160 ... Done
- Regenerate config tikv -> 192.168.1.12:20161 ... Done
- Regenerate config tikv -> 192.168.1.13:20160 ... Done
- Regenerate config tidb -> 192.168.1.11:4000 ... Done
- Regenerate config tidb -> 192.168.1.12:4001 ... Done
- Regenerate config tidb -> 192.168.1.13:4000 ... Done
- Regenerate config tiflash -> 192.168.1.11:9000 ... Done
- Regenerate config cdc -> 192.168.1.11:8300 ... Done
- Regenerate config prometheus -> 192.168.1.11:9090 ... Done
- Regenerate config grafana -> 192.168.1.11:3000 ... Done
- Regenerate config alertmanager -> 192.168.1.11:9093 ... Done
- Regenerate config tispark -> 192.168.1.11:7077 ... Done
- Regenerate config tispark -> 192.168.1.11:7078 ... Done
+ [ Serial ] - SystemCtl: host=192.168.1.11 action=reload prometheus-9090.service
Scaled cluster `tidbcluster` in successfully
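As the scale-in output notes, the removed tiflash instance first becomes a tombstone and is cleaned up later with the prune command. A sketch of the follow-up (the commands are printed here rather than executed; run them against the real cluster only after `tiup cluster display` reports the node as Tombstone):

```shell
#!/bin/sh
# Follow-up after scale-in: the removed TiFlash node stays Tombstone for a while.
# Printed rather than executed here, since they need a live cluster.
CLUSTER="tidbcluster"
echo "tiup cluster display ${CLUSTER}"   # confirm 192.168.1.14:9000 shows Tombstone
echo "tiup cluster prune ${CLUSTER}"     # then clean up the tombstoned instance
```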
4. Verify
mysql> select * from information_schema.cluster_info;
+---------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+-----------+
| TYPE | INSTANCE | STATUS_ADDRESS | VERSION | GIT_HASH | START_TIME | UPTIME | SERVER_ID |
+---------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+-----------+
| tidb | 192.168.1.13:4000 | 192.168.1.13:10080 | 5.1.1 | 797bddd25310ed42f0791c8eccb78be8cce2f502 | 2024-08-16T17:45:16+08:00 | 9m53.593458308s | 0 |
| tidb | 192.168.1.12:4001 | 192.168.1.12:10081 | 5.1.1 | 797bddd25310ed42f0791c8eccb78be8cce2f502 | 2024-08-16T17:45:16+08:00 | 9m53.593464299s | 0 |
| tidb | 192.168.1.11:4000 | 192.168.1.11:10080 | 5.1.1 | 797bddd25310ed42f0791c8eccb78be8cce2f502 | 2024-08-16T17:45:16+08:00 | 9m53.593465809s | 0 |
| pd | 192.168.1.13:2379 | 192.168.1.13:2379 | 5.1.1 | 7cba1912b317a533e18b16ea2ba9a14ed2891129 | 2024-08-16T17:45:02+08:00 | 10m7.5934672s | 0 |
| pd | 192.168.1.11:2379 | 192.168.1.11:2379 | 5.1.1 | 7cba1912b317a533e18b16ea2ba9a14ed2891129 | 2024-08-16T17:45:02+08:00 | 10m7.593468658s | 0 |
| pd | 192.168.1.12:2379 | 192.168.1.12:2379 | 5.1.1 | 7cba1912b317a533e18b16ea2ba9a14ed2891129 | 2024-08-16T17:45:02+08:00 | 10m7.593470062s | 0 |
| tikv | 192.168.1.12:20161 | 192.168.1.12:20181 | 5.1.1 | 4705d7c6e9c42d129d3309e05911ec6b08a25a38 | 2024-08-16T17:45:06+08:00 | 10m3.593471403s | 0 |
| tikv | 192.168.1.13:20160 | 192.168.1.13:20180 | 5.1.1 | 4705d7c6e9c42d129d3309e05911ec6b08a25a38 | 2024-08-16T17:45:06+08:00 | 10m3.593472704s | 0 |
| tikv | 192.168.1.11:20160 | 192.168.1.11:20180 | 5.1.1 | 4705d7c6e9c42d129d3309e05911ec6b08a25a38 | 2024-08-16T17:45:06+08:00 | 10m3.593474043s | 0 |
| tiflash | 192.168.1.11:3930 | 192.168.1.11:20292 | v5.1.1 | c8fabfb50fe28db17cc5118133a69be255c40efd | 2024-08-16T17:45:27+08:00 | 9m42.593475421s | 0 |
+---------+--------------------+--------------------+---------+------------------------------------------+---------------------------+-----------------+-----------+
10 rows in set (0.03 sec)
-- As expected, only one TiFlash node remains.
mysql> SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.tiflash_replica;
Empty set (0.01 sec)
mysql> alter table test.my_test2 set tiflash replica 1;
Query OK, 0 rows affected (0.09 sec)
mysql> SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.tiflash_replica;
+--------------+------------+
| TABLE_SCHEMA | TABLE_NAME |
+--------------+------------+
| test | my_test2 |
+--------------+------------+
1 row in set (0.00 sec)
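The replica shows up in tiflash_replica immediately, but it is not necessarily synced yet: the view also exposes AVAILABLE and PROGRESS columns that track replication status. A small sketch that builds the polling query (connection details omitted; the table name is the one used in this walkthrough):

```shell
#!/bin/sh
# Build the query for watching replica sync status after SET TIFLASH REPLICA 1.
# PROGRESS climbs from 0 to 1, and AVAILABLE becomes 1 once the replica is usable.
SQL="SELECT TABLE_SCHEMA, TABLE_NAME, AVAILABLE, PROGRESS
     FROM information_schema.tiflash_replica
     WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 'my_test2'"
echo "$SQL"
```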