I. Test Environment
| Item | Value |
|---|---|
| CPU | Intel® Core™ i5-1035G1 CPU @ 1.00GHz |
| OS | CentOS Linux release 7.9.2009 (Core) |
| Memory | 4 GB |
| Logical cores | 3 |
| Retained node 1 IP | 192.168.142.10 |
| Node 2 IP (to be removed) | 192.168.142.11 |
| Database version | 8.6.2.43-R33.132743 |
II. Procedure
This walkthrough removes a coordinator (management) node and a data node at the same time.
1. Check the cluster status
[root@localhost gcinstall]# gcadmin
CLUSTER STATE: ACTIVE
CLUSTER MODE: NORMAL
=====================================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
=====================================================================
| NodeName | IpAddress |gcware |gcluster |DataState |
---------------------------------------------------------------------
| coordinator1 | 192.168.142.10 | OPEN | OPEN | 0 |
---------------------------------------------------------------------
| coordinator2 | 192.168.142.11 | OPEN | OPEN | 0 |
---------------------------------------------------------------------
=================================================================
| GBASE DATA CLUSTER INFORMATION |
=================================================================
|NodeName | IpAddress |gnode |syncserver |DataState |
-----------------------------------------------------------------
| node1 | 192.168.142.10 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
| node2 | 192.168.142.11 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
2. Check the cluster distribution
[root@localhost gcinstall]# gcadmin showdistribution
Distribution ID: 2 | State: new | Total segment num: 2
Primary Segment Node IP Segment ID Duplicate Segment node IP
========================================================================================================================
| 192.168.142.11 | 1 | |
------------------------------------------------------------------------------------------------------------------------
| 192.168.142.10 | 2 | |
========================================================================================================================
Note the distribution ID (2); it will be needed later.
3. Edit the distribution change file
Keep only the retained node(s) in the file; the node being removed is left out.
[root@localhost gcinstall]# su - gbase
上一次登录:五 8月 12 09:21:47 CST 2022pts/4 上
[gbase@localhost ~]$ cd /opt/pkg/gcinstall/
[gbase@localhost gcinstall]$ vim gcChangeInfo.xml
[gbase@localhost gcinstall]$ cat gcChangeInfo.xml
<?xml version="1.0" encoding="utf-8"?>
<servers>
<rack>
<node ip="192.168.142.10"/>
</rack>
</servers>
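Rather than editing the file by hand, the retained-node list can also be generated. A minimal Python sketch (the `build_change_info` helper is hypothetical, but it emits the same structure as the file above):

```python
import xml.etree.ElementTree as ET

def build_change_info(retained_ips):
    """Build a gcChangeInfo.xml document listing only the retained nodes."""
    servers = ET.Element("servers")
    rack = ET.SubElement(servers, "rack")
    for ip in retained_ips:
        ET.SubElement(rack, "node", ip=ip)
    return ET.tostring(servers, encoding="unicode")

# One retained node, matching the example above.
print(build_change_info(["192.168.142.10"]))
```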
4. Regenerate the distribution
In the command below, `d 0` means the new distribution keeps no duplicate (backup) segments, which is exactly what the warning confirms. The newly generated distribution gets ID 3.
[gbase@localhost gcinstall]$ gcadmin distribution gcChangeInfo.xml p 1 d 0
gcadmin generate distribution ...
[warning]: parameter [d num] is 0, the new distribution will has no segment backup
please ensure this is ok, input y or n: y
gcadmin generate distribution successful
[gbase@localhost gcinstall]$ gcadmin showdistribution
Distribution ID: 3 | State: new | Total segment num: 1
Primary Segment Node IP Segment ID Duplicate Segment node IP
========================================================================================================================
| 192.168.142.10 | 1 | |
========================================================================================================================
Distribution ID: 2 | State: old | Total segment num: 2
Primary Segment Node IP Segment ID Duplicate Segment node IP
========================================================================================================================
| 192.168.142.11 | 1 | |
------------------------------------------------------------------------------------------------------------------------
| 192.168.142.10 | 2 | |
========================================================================================================================
5. Initialize the node data map
[gbase@localhost gcinstall]$ gccli
GBase client 8.6.2.43-R33.132743. Copyright (c) 2004-2022, GBase. All Rights Reserved.
gbase>
gbase> initnodedatamap;
Query OK, 0 rows affected (Elapsed: 00:00:00.85)
6. Rebalance the instance
gbase> rebalance instance;
Query OK, 2 rows affected (Elapsed: 00:00:00.46)
7. Check the rebalance progress
gbase> select * from gclusterdb.rebalancing_status;
+------------------------------+------------+-------------------+----------+----------------------------+----------------------------+-----------+------------+----------+-----------------------+-----------------+
| index_name | db_name | table_name | tmptable | start_time | end_time | status | percentage | priority | host | distribution_id |
+------------------------------+------------+-------------------+----------+----------------------------+----------------------------+-----------+------------+----------+-----------------------+-----------------+
| gclusterdb.audit_log_express | gclusterdb | audit_log_express | | 2022-08-12 13:56:31.178000 | 2022-08-12 13:56:31.656000 | COMPLETED | 100 | 5 | ::ffff:192.168.142.10 | 3 |
| czg.czg | czg | czg | | 2022-08-12 13:56:31.169000 | 2022-08-12 13:56:31.645000 | COMPLETED | 100 | 5 | ::ffff:192.168.142.10 | 3 |
+------------------------------+------------+-------------------+----------+----------------------------+----------------------------+-----------+------------+----------+-----------------------+-----------------+
2 rows in set (Elapsed: 00:00:00.00)
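On larger data sets the rebalance takes a while, so it is convenient to poll gclusterdb.rebalancing_status until every row reports COMPLETED. A minimal polling sketch; `fetch_status` is an assumed helper that returns the status column (stubbed here with the two COMPLETED rows above — in practice it would run the query through gccli or a client driver):

```python
import time

def all_completed(statuses):
    """True once every rebalancing task has finished."""
    return len(statuses) > 0 and all(s == "COMPLETED" for s in statuses)

def wait_for_rebalance(fetch_status, interval=5, max_polls=120):
    """Poll until all tasks report COMPLETED or the poll budget runs out."""
    for _ in range(max_polls):
        if all_completed(fetch_status()):
            return True
        time.sleep(interval)
    return False

# Stub matching the two rows shown above; a real fetch would query the table.
sample = ["COMPLETED", "COMPLETED"]
print(wait_for_rebalance(lambda: sample, interval=0))
```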
8. Check for missed tables
If this query returns any rows, run `rebalance table <db_name>.<table_name>;` for each of them.
gbase> select * from gbase.table_distribution where data_distribution_id=2;
Empty set (Elapsed: 00:00:00.00)
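If the query had returned rows instead, each (db_name, table_name) pair would still be on the old distribution and need an explicit rebalance. A small sketch that turns such rows into the statements to run (`rebalance_statements` is a hypothetical helper name):

```python
def rebalance_statements(rows):
    """rows: (db_name, table_name) pairs still on the old distribution."""
    return [f"rebalance table {db}.{tbl};" for db, tbl in rows]

# Example row, matching the czg.czg table seen in step 7.
print(rebalance_statements([("czg", "czg")]))
```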
9. Drop the old node data map
gbase> refreshnodedatamap drop 2;
Query OK, 0 rows affected (Elapsed: 00:00:00.83)
10. Remove the old distribution
[gbase@localhost gcinstall]$ gcadmin rmdistribution 2
cluster distribution ID [2]
it will be removed now
please ensure this is ok, input y or n: y
gcadmin remove distribution [2] success
11. Remove the registration of the removed data node
The change file now lists the data node that has just been removed.
[gbase@localhost gcinstall]$ cat gcChangeInfo.xml
<?xml version="1.0" encoding="utf-8"?>
<servers>
<rack>
<node ip="192.168.142.11"/>
</rack>
</servers>
[gbase@localhost gcinstall]$ gcadmin rmnodes gcChangeInfo.xml
gcadmin remove nodes ...
node [192.168.142.11] had been removed
gcadmin rmnodes success
12. Stop the service on all nodes
Run this on every node.
[root@localhost ~]# service gcware stop
Stopping GCMonit success!
Signaling GCRECOVER (gcrecover) to terminate: [ 确定 ]
Waiting for gcrecover services to unload:.... [ 确定 ]
Signaling GCSYNC (gc_sync_server) to terminate: [ 确定 ]
Waiting for gc_sync_server services to unload: [ 确定 ]
Signaling GCLUSTERD to terminate: [ 确定 ]
Waiting for gclusterd services to unload:........ [ 确定 ]
Signaling GBASED to terminate: [ 确定 ]
Waiting for gbased services to unload:............ [ 确定 ]
Signaling GCWARE (gcware) to terminate: [ 确定 ]
Waiting for gcware services to unload:.. [ 确定 ]
13、卸载缩容节点的数据库服务
coordinateHost 写要卸载的管理节点ip,
dataHost 写要卸载的数据节点ip,
existCoordinateHost写要保留的管理节点ip,
existDataHost写要保留的数据节点ip。
[gbase@localhost gcinstall]$ cat demo.options
installPrefix= /opt
coordinateHost = 192.168.142.11
coordinateHostNodeID = 234,235,237
dataHost = 192.168.142.11
existCoordinateHost = 192.168.142.10
existDataHost = 192.168.142.10
loginUser= root
loginUserPwd = 'qwer1234'
#loginUserPwdFile = loginUserPwd.json
dbaUser = gbase
dbaGroup = gbase
dbaPwd = 'gbase'
rootPwd = 'qwer1234'
#rootPwdFile = rootPwd.json
dbRootPwd = ''
#mcastAddr = 226.94.1.39
mcastPort = 5493
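A mistake in demo.options — for instance the same IP appearing both in coordinateHost and existCoordinateHost — would point the uninstaller at the wrong node, so it is worth sanity-checking the file before running unInstall.py. A minimal sketch, assuming the simple key = value format shown above (both helper names are hypothetical):

```python
def parse_options(text):
    """Parse key = value lines of a demo.options file, skipping comments."""
    opts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        opts[key.strip()] = value.strip().strip("'")
    return opts

def check_no_overlap(opts):
    """Nodes being removed must not appear in the retained-node lists."""
    removed = set(opts.get("coordinateHost", "").split(",")) | set(
        opts.get("dataHost", "").split(","))
    kept = set(opts.get("existCoordinateHost", "").split(",")) | set(
        opts.get("existDataHost", "").split(","))
    removed.discard("")
    kept.discard("")
    return removed.isdisjoint(kept)

sample = """installPrefix= /opt
coordinateHost = 192.168.142.11
dataHost = 192.168.142.11
existCoordinateHost = 192.168.142.10
existDataHost = 192.168.142.10
"""
print(check_no_overlap(parse_options(sample)))
```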
[gbase@localhost gcinstall]$ ./unInstall.py --silent=demo.options
These GCluster nodes will be uninstalled.
CoordinateHost:
192.168.142.11
DataHost:
192.168.142.11
Are you sure to uninstall GCluster ([Y,y]/[N,n])? y
192.168.142.11 UnInstall 192.168.142.11 successfully.
Update all coordinator corosync conf.
192.168.142.10 update corosync conf successfully.
14. Start the database service on each node
Run this on every node.
[root@localhost ~]# service gcware start
Starting GCWARE (gcwexec): [ 确定 ]
Starting GBASED : [ 确定 ]
Starting GCSYNC : [ 确定 ]
Starting GCLUSTERD : [ 确定 ]
Starting GCRECOVER : [ 确定 ]
Starting GCMonit success!
15. Verify the scale-down
Only one node remains, so the scale-down succeeded.
[gbase@localhost gcinstall]$ gcadmin
CLUSTER STATE: ACTIVE
CLUSTER MODE: NORMAL
=====================================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
=====================================================================
| NodeName | IpAddress |gcware |gcluster |DataState |
---------------------------------------------------------------------
| coordinator1 | 192.168.142.10 | OPEN | OPEN | 0 |
---------------------------------------------------------------------
=================================================================
| GBASE DATA CLUSTER INFORMATION |
=================================================================
|NodeName | IpAddress |gnode |syncserver |DataState |
-----------------------------------------------------------------
| node1 | 192.168.142.10 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
[gbase@localhost gcinstall]$ gcadmin showdistribution
Distribution ID: 3 | State: new | Total segment num: 1
Primary Segment Node IP Segment ID Duplicate Segment node IP
========================================================================================================================
| 192.168.142.10 | 1 | |
========================================================================================================================
III. Tips
1. Removing only a data node
To remove only a data node, set only dataHost in step 13 of Section II and leave coordinateHost unset; everything else stays the same as in the example above.
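Following that note, demo.options would keep only the dataHost line, with coordinateHost (and its node ID) commented out; this is a sketch, not verified output, and the remaining lines are unchanged from the example above:

```
installPrefix= /opt
#coordinateHost =
#coordinateHostNodeID =
dataHost = 192.168.142.11
existCoordinateHost = 192.168.142.10
existDataHost = 192.168.142.10
```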
2. Removing only a coordinator node
(1) Check the cluster status
[root@localhost ~]# gcadmin
CLUSTER STATE: ACTIVE
CLUSTER MODE: NORMAL
=====================================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
=====================================================================
| NodeName | IpAddress |gcware |gcluster |DataState |
---------------------------------------------------------------------
| coordinator1 | 192.168.142.10 | OPEN | OPEN | 0 |
---------------------------------------------------------------------
| coordinator2 | 192.168.142.11 | OPEN | OPEN | 0 |
---------------------------------------------------------------------
=================================================================
| GBASE DATA CLUSTER INFORMATION |
=================================================================
|NodeName | IpAddress |gnode |syncserver |DataState |
-----------------------------------------------------------------
| node1 | 192.168.142.10 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
| node2 | 192.168.142.11 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
(2) Stop the cluster
Run this on every node.
[root@localhost ~]# service gcware stop
Stopping GCMonit success!
Signaling GCRECOVER (gcrecover) to terminate: [ 确定 ]
Waiting for gcrecover services to unload:... [ 确定 ]
Signaling GCSYNC (gc_sync_server) to terminate: [ 确定 ]
Waiting for gc_sync_server services to unload: [ 确定 ]
Signaling GCLUSTERD to terminate: [ 确定 ]
Waiting for gclusterd services to unload:..... [ 确定 ]
Signaling GBASED to terminate: [ 确定 ]
Waiting for gbased services to unload:.. [ 确定 ]
Signaling GCWARE (gcware) to terminate: [ 确定 ]
Waiting for gcware services to unload:. [ 确定 ]
(3) Edit the configuration file
Comment out dataHost.
coordinateHost: the IP of the coordinator node to uninstall.
existCoordinateHost: the IP of the coordinator node(s) to keep.
existDataHost: the IP of the data node(s) to keep.
[gbase@localhost gcinstall]$ cat demo.options
installPrefix= /opt
coordinateHost = 192.168.142.11
coordinateHostNodeID = 234,235,237
#dataHost = 192.168.142.11
existCoordinateHost = 192.168.142.10
existDataHost = 192.168.142.10,192.168.142.11
loginUser= root
loginUserPwd = 'qwer1234'
#loginUserPwdFile = loginUserPwd.json
dbaUser = gbase
dbaGroup = gbase
dbaPwd = 'gbase'
rootPwd = 'qwer1234'
#rootPwdFile = rootPwd.json
dbRootPwd = ''
#mcastAddr = 226.94.1.39
mcastPort = 5493
(4) Uninstall the coordinator node
[gbase@localhost gcinstall]$ ./unInstall.py --silent=demo.options
These GCluster nodes will be uninstalled.
CoordinateHost:
192.168.142.11
DataHost:
Are you sure to uninstall GCluster ([Y,y]/[N,n])? y
192.168.142.11 UnInstall 192.168.142.11 successfully.
Update all coordinator corosync conf.
192.168.142.10 update corosync conf successfully.
(5) Start the cluster service
Run this on every node.
[root@localhost gcinstall]# service gcware start
Starting GCWARE (gcwexec): [ 确定 ]
Starting GBASED : [ 确定 ]
Starting GCSYNC : [ 确定 ]
Starting GCLUSTERD : [ 确定 ]
Starting GCRECOVER : [ 确定 ]
Starting GCMonit success!
(6) Check the cluster status
Only one coordinator node remains, so the uninstall succeeded.
[root@localhost gcinstall]# gcadmin
CLUSTER STATE: ACTIVE
CLUSTER MODE: NORMAL
=====================================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
=====================================================================
| NodeName | IpAddress |gcware |gcluster |DataState |
---------------------------------------------------------------------
| coordinator1 | 192.168.142.10 | OPEN | OPEN | 0 |
---------------------------------------------------------------------
=================================================================
| GBASE DATA CLUSTER INFORMATION |
=================================================================
|NodeName | IpAddress |gnode |syncserver |DataState |
-----------------------------------------------------------------
| node1 | 192.168.142.10 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
| node2 | 192.168.142.11 | OPEN | OPEN | 0 |
-----------------------------------------------------------------