GBase 8a (General Data Technology) Study Notes 06: Cluster Node Expansion (Version 8.6)

I. Test Environment

CPU: Intel® Core™ i5-1035G1 CPU @ 1.00GHz
OS: CentOS Linux release 7.9.2009 (Core)
Memory: 4 GB
Logical cores: 3
Existing node 1 IP: 192.168.142.10
New node 2 IP (to be added): 192.168.142.11
Database version: 8.6.2.43-R33.132743

II. Procedure

1. Check the cluster status

[root@localhost ~]# gcadmin
CLUSTER STATE:  ACTIVE
CLUSTER MODE:   NORMAL

=====================================================================
|               GBASE COORDINATOR CLUSTER INFORMATION               |
=====================================================================
|   NodeName   |       IpAddress       |gcware |gcluster |DataState |
---------------------------------------------------------------------
| coordinator1 |    192.168.142.10     | OPEN  |  OPEN   |    0     |
---------------------------------------------------------------------
=================================================================
|                GBASE DATA CLUSTER INFORMATION                 |
=================================================================
|NodeName |       IpAddress       |gnode |syncserver |DataState |
-----------------------------------------------------------------
|  node1  |    192.168.142.10     | OPEN |   OPEN    |    0     |
-----------------------------------------------------------------

2. Stop the cluster

Because we are adding a coordinator (management) node, all services on the existing nodes must be stopped first. If you are only adding data nodes, the services can keep running. Attempting to add a coordinator node without stopping the services fails with:

some gcluster process still running on host 192.168.142.10, use 'pidof gclusterd gbased corosync gcmonit gcrecover gc_sync_server;' to check.
Must stop all gcluster nodes before extend gcluster. you can search 'still running' in gcinstall.log to find them.

Run the stop command on every existing node (a sketch for doing this over SSH follows the output below).

[root@localhost ~]# service gcware stop
Stopping GCMonit success!
Signaling GCRECOVER (gcrecover) to terminate:              [  OK  ]
Waiting for gcrecover services to unload:....              [  OK  ]
Signaling GCSYNC (gc_sync_server) to terminate:            [  OK  ]
Waiting for gc_sync_server services to unload:             [  OK  ]
Signaling GCLUSTERD  to terminate:                         [  OK  ]
Waiting for gclusterd services to unload:........          [  OK  ]
Signaling GBASED  to terminate:                            [  OK  ]
Waiting for gbased services to unload:....                 [  OK  ]
Signaling GCWARE (gcware) to terminate:                    [  OK  ]
Waiting for gcware services to unload:.                    [  OK  ]
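
To stop the services on all existing nodes in one pass, something like the following works. This is a minimal sketch, assuming passwordless SSH as root and that 192.168.142.10 is the only existing node; in a larger cluster list every existing node:

# run from any host that can reach the existing nodes as root
for ip in 192.168.142.10; do
    ssh root@"$ip" "service gcware stop"
done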

3. Verify the cluster processes have stopped

Worth checking on every node; recommended, though optional (a per-node check sketch follows the output below).

[root@localhost ~]# ps -ef|grep gbase
root       4177   3591  0 16:43 pts/0    00:00:00 grep --color=auto gbase
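
The installer's error message in step 2 names the exact check it performs; a minimal per-node sketch based on it, under the same SSH assumption as above:

for ip in 192.168.142.10; do
    ssh root@"$ip" "pidof gclusterd gbased corosync gcmonit gcrecover gc_sync_server" || echo "no gbase processes left on $ip"
done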

4. Edit the configuration file demo.options

[root@localhost gcluster]# cd /opt/pkg/gcinstall/

[root@localhost gcinstall]# ll
total 93272
-rwxrwxrwx. 1 root  root       435 Aug  7 20:27 192.168.142.10.options
-rwxrwxrwx. 1 root  root       435 Aug  7 20:27 192.168.142.11.options
-rw-r--r--. 1 gbase gbase      292 Dec 17  2021 BUILDINFO
-rw-r--r--. 1 gbase gbase  2249884 Dec 17  2021 bundle_data.tar.bz2
-rw-r--r--. 1 gbase gbase 87478657 Dec 17  2021 bundle.tar.bz2
-rw-r--r--. 1 gbase gbase     1951 Dec 17  2021 CGConfigChecker.py
-rw-r--r--. 1 root  root      1895 Aug  7 20:27 CGConfigChecker.pyc
-rw-r--r--. 1 gbase gbase      309 Dec 17  2021 cluster.conf
-rwxr-xr-x. 1 gbase gbase     4167 Dec 17  2021 CorosyncConf.py
-rw-r--r--. 1 gbase gbase      420 Aug  7 20:26 demo.options
-rw-r--r--. 1 gbase gbase      154 Dec 17  2021 dependRpms
-rw-r--r--. 1 gbase gbase      684 Dec 17  2021 example.xml
-rwxr-xr-x. 1 gbase gbase      419 Dec 17  2021 extendCfg.xml
-rw-r--r--. 1 gbase gbase      781 Dec 17  2021 FileCheck.py
-rw-r--r--. 1 root  root      1173 Aug  7 20:27 FileCheck.pyc
-rw-r--r--. 1 gbase gbase     2700 Dec 17  2021 fulltext.py
-rw-r--r--. 1 gbase gbase  4818440 Dec 17  2021 gbase_data_timezone.sql
-rw-r--r--. 1 gbase gbase      137 Aug  7 20:29 gcChangeInfo.xml
-rwxrw-rw-. 1 root  root     13109 Aug  7 20:29 gcinstall.log
-rwxr-xr-x. 1 gbase gbase    76282 Dec 17  2021 gcinstall.py
-rwxrwxrwx. 1 gbase gbase     3362 Dec 17  2021 GetOSType.py
-rw-r--r--. 1 gbase gbase   156505 Dec 17  2021 InstallFuns.py
-rw-r--r--. 1 root  root    126295 Aug  7 20:27 InstallFuns.pyc
-rw-r--r--. 1 gbase gbase   237364 Dec 17  2021 InstallTar.py
-rw-r--r--. 1 gbase gbase     1114 Dec 17  2021 license.txt
-rwxr-xr-x. 1 gbase gbase      296 Dec 17  2021 loginUserPwd.json
-rwxr-xr-x. 1 gbase gbase    75990 Dec 17  2021 pexpect.py
-rw-r--r--. 1 root  root     63064 Aug  7 20:27 pexpect.pyc
-rwxr-xr-x. 1 gbase gbase    25093 Dec 17  2021 replace.py
-rw-r--r--. 1 gbase gbase     1715 Dec 17  2021 RestoreLocal.py
-rwxr-xr-x. 1 gbase gbase     6622 Dec 17  2021 Restore.py
-rw-r--r--. 1 gbase gbase     7312 Dec 17  2021 rmt.py
-rw-r--r--. 1 root  root      5625 Aug  7 20:27 rmt.pyc
-rwxr-xr-x. 1 gbase gbase      296 Dec 17  2021 rootPwd.json
-rw-r--r--. 1 gbase gbase     2717 Dec 17  2021 SSHThread.py
-rw-r--r--. 1 root  root      3823 Aug  7 20:27 SSHThread.pyc
-rwxr-xr-x. 1 gbase gbase    21710 Dec 17  2021 unInstall.py
-rw-r--r--. 1 root  root     17079 Aug  7 20:27 unInstall.pyc

[root@localhost gcinstall]# cat demo.options 
installPrefix= /opt
coordinateHost = 192.168.142.11
coordinateHostNodeID = 234,235,237
dataHost = 192.168.142.11
existCoordinateHost = 192.168.142.10
existDataHost = 192.168.142.10
loginUser= root
loginUserPwd = 'qwer1234'
#loginUserPwdFile = loginUserPwd.json
dbaUser = gbase
dbaGroup = gbase
dbaPwd = 'gbase'
rootPwd = 'qwer1234'
#rootPwdFile = rootPwd.json
dbRootPwd = ''
#mcastAddr = 226.94.1.39
mcastPort = 5493
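
For reference, this is how I read the key fields of demo.options. The inline comments are annotations for this article, not syntax I have verified inside the options file itself; check the installation manual for your version:

installPrefix = /opt                    # install prefix; must match the existing cluster
coordinateHost = 192.168.142.11         # coordinator (management) node(s) to add
dataHost = 192.168.142.11               # data node(s) to add
existCoordinateHost = 192.168.142.10    # coordinator node(s) already in the cluster
existDataHost = 192.168.142.10          # data node(s) already in the cluster
loginUser = root                        # OS account the installer logs in with
dbRootPwd = ''                          # database root password of the existing cluster ('' = empty)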

5. Install the database service on the new node

[root@localhost gcinstall]# su - gbase
Last login: Fri Aug 12 09:10:09 CST 2022 on pts/2

[gbase@localhost gcinstall]$ ./gcinstall.py --silent=demo.options
*********************************************************************************
Thank you for choosing GBase product!


Please read carefully the following licencing agreement before installing GBase product:
TIANJIN GENERAL DATA TECHNOLOGY CO., LTD. LICENSE AGREEMENT
 
READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED SUPPLEMENTAL LICENSETERMS (COLLECTIVELY "AGREEMENT") CAREFULLY BEFORE OPENING THE SOFTWAREMEDIA PACKAGE.  BY OPENING THE SOFTWARE MEDIA PACKAGE, YOU AGREE TO THE TERMS OF THIS AGREEMENT.  IF YOU ARE ACCESSING THE SOFTWARE ELECTRONICALLY, INDICATE YOUR ACCEPTANCE OF THESE TERMS.  IF YOU DO NOT AGREE TO ALL THESE TERMS, PROMPTLY RETURN THE UNUSED SOFTWARE TO YOUR PLACE OF PURCHASE FOR A REFUND.
 

1.  CHINESE GOVERNMENT RESTRICTED.  If Software is being acquired by or on behalf of the Chinese Government , then the Government's rights in Software and accompanying documentation will be only as set forth in this Agreement.
 
2.  GOVERNING LAW.  Any action related to this Agreement will be governed by Chinese law: "COPYRIGHT LAW OF THE PEOPLE'S REPUBLIC OF CHINA","PATENT LAW OF THE PEOPLE'S REPUBLIC OF CHINA","TRADEMARK LAW OF THE PEOPLE'S REPUBLIC OF CHINA","COMPUTER SOFTWARE PROTECTION REGULATIONS OF THE PEOPLE'S REPUBLIC OF CHINA".  No choice of law rules of any jurisdiction will apply."
 

*********************************************************************************
Do you accept the above licence agreement ([Y,y]/[N,n])? y
*********************************************************************************
                     Welcome to install GBase products
*********************************************************************************
Environmental Checking on gcluster nodes.
CoordinateHost:
192.168.142.11
DataHost:
192.168.142.11
Are you sure to install GCluster on these nodes ([Y,y]/[N,n])? y
192.168.142.11       	Start install on host 192.168.142.11
192.168.142.10       	Start install on host 192.168.142.10
192.168.142.11       	mkdir /opt_prepare on host 192.168.142.11.
192.168.142.10       	mkdir /opt_prepare on host 192.168.142.10.
192.168.142.11       	Copying InstallTar.py to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying InstallTar.py to host 192.168.142.10:/opt_prepare
192.168.142.11       	Copying InstallFuns.py to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying rmt.py to host 192.168.142.10:/opt_prepare
192.168.142.11       	Copying SSHThread.py to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying SSHThread.py to host 192.168.142.10:/opt_prepare
192.168.142.11       	Copying RestoreLocal.py to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying RestoreLocal.py to host 192.168.142.10:/opt_prepare
192.168.142.11       	Copying pexpect.py to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying pexpect.py to host 192.168.142.10:/opt_prepare
192.168.142.11       	Copying bundle.tar.bz2 to host 192.168.142.11:/opt_prepare
192.168.142.10       	Updating corosync configure files.
192.168.142.11       	Copying bundle.tar.bz2 to host 192.168.142.11:/opt_prepare
192.168.142.10       	Updating corosync configure files.
192.168.142.11       	Copying bundle_data.tar.bz2 to host 192.168.142.11:/opt_prepare
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Installing gcluster.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
Update and sync configuration file...
Starting all gcluster nodes...
Sync coordinator system tables...
check database password ...
check database password successful
check rsync command status
use rsync command sync metadata
Adding new datanodes to gcware...
ExtendCluster Successfully

6. Check the current cluster status

[gbase@localhost gcinstall]$ gcadmin 
CLUSTER STATE:  ACTIVE
CLUSTER MODE:   NORMAL

=====================================================================
|               GBASE COORDINATOR CLUSTER INFORMATION               |
=====================================================================
|   NodeName   |       IpAddress       |gcware |gcluster |DataState |
---------------------------------------------------------------------
| coordinator1 |    192.168.142.10     | OPEN  |  OPEN   |    0     |
---------------------------------------------------------------------
| coordinator2 |    192.168.142.11     | OPEN  |  OPEN   |    0     |
---------------------------------------------------------------------
=================================================================
|                GBASE DATA CLUSTER INFORMATION                 |
=================================================================
|NodeName |       IpAddress       |gnode |syncserver |DataState |
-----------------------------------------------------------------
|  node1  |    192.168.142.10     | OPEN |   OPEN    |    0     |
-----------------------------------------------------------------
|  node2  |    192.168.142.11     | OPEN |   OPEN    |    0     |
-----------------------------------------------------------------
[gbase@localhost gcinstall]$ gcadmin showdistribution

              Distribution ID: 1 | State: new | Total segment num: 1

     Primary Segment Node IP                           Segment ID         Duplicate Segment node IP
========================================================================================================================
|    192.168.142.10                              |       1          |                                                  |
========================================================================================================================

We can see that the coordinator node has been added successfully, but the new data node has no distribution yet; the original distribution has ID 1.

7. Configure the new distribution

[gbase@localhost gcinstall]$ cat gcChangeInfo.xml 
<?xml version="1.0" encoding="utf-8"?>
<servers>
 <rack>
  <node ip="192.168.142.11"/>
  <node ip="192.168.142.10"/>
 </rack>
</servers>

[gbase@localhost gcinstall]$  gcadmin distribution gcChangeInfo.xml p 1 d 0
gcadmin generate distribution ...

[warning]: parameter [d num] is 0, the new distribution will has no segment backup
please ensure this is ok, input y or n: y
NOTE: node [192.168.142.11] is coordinator node, it shall be data node too
copy system table from 192.168.142.10 to 192.168.142.11
source ip: 192.168.142.10
target ip: 192.168.142.11
gcadmin generate distribution successful
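
The warning above appears because d 0 means the new distribution has no duplicate (backup) segments. To keep one backup copy per segment, the same command can be run with d 1 instead; note that if the database root password is not empty you also need to pass db_root_pwd, as described in the pitfall in section III:

gcadmin distribution gcChangeInfo.xml p 1 d 1 db_root_pwd 'qwer1234'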

[gbase@localhost gcinstall]$ gcadmin showdistribution

              Distribution ID: 2 | State: new | Total segment num: 2

     Primary Segment Node IP                           Segment ID         Duplicate Segment node IP
========================================================================================================================
|    192.168.142.11                              |       1          |                                                  |
------------------------------------------------------------------------------------------------------------------------
|    192.168.142.10                              |       2          |                                                  |
========================================================================================================================

              Distribution ID: 1 | State: old | Total segment num: 1

     Primary Segment Node IP                           Segment ID         Duplicate Segment node IP
========================================================================================================================
|    192.168.142.10                              |       1          |                                                  |
========================================================================================================================

There are now two distributions; we will remove the old one later.

8. Initialize the node data map

[gbase@localhost gcinstall]$ gccli

GBase client 8.6.2.43-R33.132743. Copyright (c) 2004-2022, GBase.  All Rights Reserved.

gbase> initnodedatamap;
Query OK, 0 rows affected, 1 warning (Elapsed: 00:00:00.76)

9. Disable rebalancing concurrency

Do this only if you intend to adjust task priorities; with the concurrency set to 0, rebalancing tasks stay queued instead of starting immediately, leaving time to change their priority (see step 11).

gbase> set global gcluster_rebalancing_concurrent_count = 0;
Query OK, 0 rows affected, 1 warning (Elapsed: 00:00:00.01)

10. Rebalance the data

Rebalancing is supported at three levels: instance, database, and table (narrower-scope examples are sketched after the command below).

gbase> rebalance instance;
Query OK, 1 row affected (Elapsed: 00:00:00.25)
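
Besides the instance-level rebalance used here, narrower scopes can be targeted. A minimal sketch using the demo table czg.czg from this environment; the table-level form appears again in step 13, while the database-level syntax is my assumption based on the three levels listed above and should be verified against the manual:

gbase> rebalance table czg.czg;
gbase> rebalance database czg;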

11. Adjust priorities

Adjust according to the actual situation on site; this step is optional.

(1) Check the rebalancing status
gbase> select * from  gclusterdb.rebalancing_status;
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
| index_name | db_name | table_name | tmptable | start_time                 | end_time | status   | percentage | priority | host | distribution_id |
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
| czg.czg    | czg     | czg        | NULL     | 2022-08-12 10:04:58.762000 | NULL     | STARTING |          0 |        5 | NULL |               2 |
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
1 row in set (Elapsed: 00:00:00.01)
(2) Adjust the priority

A smaller priority value means a higher priority. After changing it, increase the concurrency again.

gbase> update gclusterdb.rebalancing_status set priority = 3 where index_name like 'czg.czg';
Query OK, 1 row affected (Elapsed: 00:00:00.18)
Rows matched: 1  Changed: 1  Warnings: 0

gbase> select * from  gclusterdb.rebalancing_status;
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
| index_name | db_name | table_name | tmptable | start_time                 | end_time | status   | percentage | priority | host | distribution_id |
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
| czg.czg    | czg     | czg        | NULL     | 2022-08-12 10:04:58.762000 | NULL     | STARTING |          0 |        3 | NULL |               2 |
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
1 row in set (Elapsed: 00:00:00.01)

gbase> set global gcluster_rebalancing_concurrent_count = 5;
Query OK, 0 rows affected, 1 warning (Elapsed: 00:00:00.00)

12. Check rebalancing progress

gbase> select * from  gclusterdb.rebalancing_status;
+------------+---------+------------+----------+----------------------------+----------------------------+-----------+------------+----------+-----------------------+-----------------+
| index_name | db_name | table_name | tmptable | start_time                 | end_time                   | status    | percentage | priority | host                  | distribution_id |
+------------+---------+------------+----------+----------------------------+----------------------------+-----------+------------+----------+-----------------------+-----------------+
| czg.czg    | czg     | czg        |          | 2022-08-12 10:33:59.337000 | 2022-08-12 10:33:59.654000 | COMPLETED |        100 |        3 | ::ffff:192.168.142.10 |               2 |
+------------+---------+------------+----------+----------------------------+----------------------------+-----------+------------+----------+-----------------------+-----------------+
1 row in set (Elapsed: 00:00:00.01)

13. Check whether any tables still use the old distribution

gbase> select * from gbase.table_distribution where data_distribution_id=1;
Empty set (Elapsed: 00:00:00.01)

If any rows are returned, redistribute those tables with rebalance table db_name.table_name.

14. Drop the old node data map

gbase> refreshnodedatamap drop 1;
Query OK, 0 rows affected, 1 warning (Elapsed: 00:00:01.00)

15. Remove the old distribution

[gbase@localhost gcinstall]$ gcadmin rmdistribution 1
cluster distribution ID [1]
it will be removed now
please ensure this is ok, input y or n: y
gcadmin remove distribution [1] success

16. Verify the final result

All nodes are in a normal state and only the new distribution remains, which means the expansion succeeded.

[gbase@localhost gcinstall]$ gcadmin
CLUSTER STATE:  ACTIVE
CLUSTER MODE:   NORMAL

=====================================================================
|               GBASE COORDINATOR CLUSTER INFORMATION               |
=====================================================================
|   NodeName   |       IpAddress       |gcware |gcluster |DataState |
---------------------------------------------------------------------
| coordinator1 |    192.168.142.10     | OPEN  |  OPEN   |    0     |
---------------------------------------------------------------------
| coordinator2 |    192.168.142.11     | OPEN  |  OPEN   |    0     |
---------------------------------------------------------------------
=================================================================
|                GBASE DATA CLUSTER INFORMATION                 |
=================================================================
|NodeName |       IpAddress       |gnode |syncserver |DataState |
-----------------------------------------------------------------
|  node1  |    192.168.142.10     | OPEN |   OPEN    |    0     |
-----------------------------------------------------------------
|  node2  |    192.168.142.11     | OPEN |   OPEN    |    0     |
-----------------------------------------------------------------

[gbase@localhost gcinstall]$ gcadmin showdistribution

              Distribution ID: 2 | State: new | Total segment num: 2

     Primary Segment Node IP                           Segment ID         Duplicate Segment node IP
========================================================================================================================
|    192.168.142.11                              |       1          |                                                  |
------------------------------------------------------------------------------------------------------------------------
|    192.168.142.10                              |       2          |                                                  |
========================================================================================================================

III. Pitfall: db_root_pwd is not input, connect gncli failed

1. Problem

When generating the new distribution, the following error occurs because the database root password is not empty:

[gbase@xdw0 gcinstall]$  gcadmin distribution gcChangeInfo.xml p 1 d 1
gcadmin generate distribution ...


db_root_pwd is not input, connect gncli failed
please input db_root_pwd and check gbased is running
gcadmin generate distribution failed

2. Solution

Add the db_root_pwd parameter:

[gbase@xdw0 gcinstall]$  gcadmin distribution gcChangeInfo.xml p 1 d 1 db_root_pwd 'qwer1234'
gcadmin generate distribution ...

copy system table from 192.168.142.10 to 192.168.142.12
source ip: 192.168.142.10
target ip: 192.168.142.12
gcadmin generate distribution successful

IV. Pitfall: HA event monitor thread call gcClmClusterTrack function fail (unresolved)

This problem appeared earlier when I expanded a two-node cluster to three nodes, with 3 GB of memory and 2 logical cores per node.

[root@localhost ~]# su - gbase
Last login: Thu Aug 11 17:29:07 CST 2022 on pts/6

[gbase@localhost gcinstall]$ ./gcinstall.py --silent=demo.options
*********************************************************************************
Thank you for choosing GBase product!


Please read carefully the following licencing agreement before installing GBase product:
TIANJIN GENERAL DATA TECHNOLOGY CO., LTD. LICENSE AGREEMENT
 
READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED SUPPLEMENTAL LICENSETERMS (COLLECTIVELY "AGREEMENT") CAREFULLY BEFORE OPENING THE SOFTWAREMEDIA PACKAGE.  BY OPENING THE SOFTWARE MEDIA PACKAGE, YOU AGREE TO THE TERMS OF THIS AGREEMENT.  IF YOU ARE ACCESSING THE SOFTWARE ELECTRONICALLY, INDICATE YOUR ACCEPTANCE OF THESE TERMS.  IF YOU DO NOT AGREE TO ALL THESE TERMS, PROMPTLY RETURN THE UNUSED SOFTWARE TO YOUR PLACE OF PURCHASE FOR A REFUND.
 

1.  CHINESE GOVERNMENT RESTRICTED.  If Software is being acquired by or on behalf of the Chinese Government , then the Government's rights in Software and accompanying documentation will be only as set forth in this Agreement.
 
2.  GOVERNING LAW.  Any action related to this Agreement will be governed by Chinese law: "COPYRIGHT LAW OF THE PEOPLE'S REPUBLIC OF CHINA","PATENT LAW OF THE PEOPLE'S REPUBLIC OF CHINA","TRADEMARK LAW OF THE PEOPLE'S REPUBLIC OF CHINA","COMPUTER SOFTWARE PROTECTION REGULATIONS OF THE PEOPLE'S REPUBLIC OF CHINA".  No choice of law rules of any jurisdiction will apply."
 

*********************************************************************************
Do you accept the above licence agreement ([Y,y]/[N,n])? y
*********************************************************************************
                     Welcome to install GBase products
*********************************************************************************
Environmental Checking on gcluster nodes.
CoordinateHost:
192.168.142.12
DataHost:
192.168.142.12
Are you sure to install GCluster on these nodes ([Y,y]/[N,n])? y
192.168.142.12       	Start install on host 192.168.142.12
192.168.142.11       	Start install on host 192.168.142.11
192.168.142.10       	Start install on host 192.168.142.10
192.168.142.12       	mkdir /opt_prepare on host 192.168.142.12.
192.168.142.11       	mkdir /opt_prepare on host 192.168.142.11.
192.168.142.10       	mkdir /opt_prepare on host 192.168.142.10.
192.168.142.12       	Copying InstallTar.py to host 192.168.142.12:/opt_prepare
192.168.142.11       	Copying InstallTar.py to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying InstallTar.py to host 192.168.142.10:/opt_prepare
192.168.142.12       	Copying InstallFuns.py to host 192.168.142.12:/opt_prepare
192.168.142.11       	Copying InstallFuns.py to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying InstallFuns.py to host 192.168.142.10:/opt_prepare
192.168.142.12       	Copying rmt.py to host 192.168.142.12:/opt_prepare
192.168.142.11       	Copying rmt.py to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying rmt.py to host 192.168.142.10:/opt_prepare
192.168.142.12       	Copying SSHThread.py to host 192.168.142.12:/opt_prepare
192.168.142.11       	Copying SSHThread.py to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying SSHThread.py to host 192.168.142.10:/opt_prepare
192.168.142.12       	Copying RestoreLocal.py to host 192.168.142.12:/opt_prepare
192.168.142.11       	Copying RestoreLocal.py to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying RestoreLocal.py to host 192.168.142.10:/opt_prepare
192.168.142.12       	Copying pexpect.py to host 192.168.142.12:/opt_prepare
192.168.142.11       	Copying pexpect.py to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying pexpect.py to host 192.168.142.10:/opt_prepare
192.168.142.12       	Copying BUILDINFO to host 192.168.142.12:/opt_prepare
192.168.142.11       	Copying BUILDINFO to host 192.168.142.11:/opt_prepare
192.168.142.10       	Copying BUILDINFO to host 192.168.142.10:/opt_prepare
192.168.142.12       	Copying bundle.tar.bz2 to host 192.168.142.12:/opt_prepare
192.168.142.11       	Updating corosync configure files.
192.168.142.10       	Updating corosync configure files.
192.168.142.12       	Copying bundle.tar.bz2 to host 192.168.142.12:/opt_prepare
192.168.142.11       	Updating corosync configure files.
192.168.142.10       	Updating corosync configure files.
192.168.142.12       	Copying bundle_data.tar.bz2 to host 192.168.142.12:/opt_prepare
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Installing gcluster.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Install gcluster on host 192.168.142.12 successfully.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
192.168.142.12       	Install gcluster on host 192.168.142.12 successfully.
192.168.142.11       	Install gcluster on host 192.168.142.11 successfully.
192.168.142.10       	Install gcluster on host 192.168.142.10 successfully.
Update and sync configuration file...
Starting all gcluster nodes...
Sync coordinator system tables...
check database password ...

The installer hung at this point without reporting an error, and the installation log showed no errors either; it was stuck trying to log in to the database.

[root@localhost ~]# tail -f /opt/pkg/gcinstall/gcinstall.log 
2022-08-11 17:28:52,297-root-DEBUG rm -f /opt/pkg/gcinstall/corosync.conf192.168.142.12
2022-08-11 17:28:52,812-root-INFO sync corosync conf successfully.
2022-08-11 17:28:52,812-root-DEBUG Starting all gcluster nodes...
2022-08-11 17:28:59,753-root-INFO start service successfull on host 192.168.142.12.
2022-08-11 17:29:08,441-root-INFO start service successfull on host 192.168.142.10.
2022-08-11 17:29:10,182-root-INFO start service successfull on host 192.168.142.11.
2022-08-11 17:29:10,686-root-DEBUG /bin/chown -R gbase:gbase gcChangeInfo.xml
2022-08-11 17:29:10,730-root-DEBUG Sync coordinator system tables...
2022-08-11 17:29:10,730-root-INFO check database password ...
2022-08-11 17:29:10,730-root-INFO gccli -uroot -p'***' -e'use gbase'

The /opt/gcluster/log/gcluster/express.log log kept reporting that the gcClmClusterTrack function call failed:

[root@localhost ~]# tail -f /opt/gcluster/log/gcluster/express.log 
2022-08-11 17:51:43.319 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail

2022-08-11 17:51:44.322 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail

2022-08-11 17:51:45.325 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail

2022-08-11 17:51:46.330 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail

2022-08-11 17:51:47.333 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail

2022-08-11 17:51:48.335 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail

2022-08-11 17:51:49.339 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail

2022-08-11 17:51:50.341 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail

2022-08-11 17:51:51.346 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail