GBase Learning Notes: Installing GBase 8a MPP Cluster V95

I. Environment Preparation

Operating system: CentOS 7.6
Servers: 3 (VMware virtual machines)
Specs: 3 GB RAM, 50 GB disk, single NIC
IP addresses:
Master node (coordinator node + data node): 192.168.172.125
Slave node (coordinator node + data node): 192.168.172.126
Slave node (data node): 192.168.172.127

System requirements:

1. The operating system should be RedHat 7.x or CentOS 7.x.
2. Disable the firewall and SELinux. (The firewall may stay enabled if the required ports are opened.)
3. All three servers must be on the same subnet with fixed IP addresses and must be able to reach each other over SSH; it is recommended to test each SSH connection once before installing, as shown in the sketch below.
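Below is a minimal sketch of these preparation steps, assuming CentOS 7 with firewalld and the node IPs used in this walkthrough; run it as root on each node (it is a reference sketch, not a transcript from the original setup).

systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux permanently (takes effect after reboot) and switch to permissive mode for the current session
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
# Test SSH connectivity to the other two nodes (repeat the test from every node)
ssh 192.168.172.126 hostname
ssh 192.168.172.127 hostname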

GBase 8a installation package:
GBase8a_MPP_Cluster-License-9.5.2.39-redhat7.3-x86_64.tar.bz2

II. Pre-installation Preparation

1. Create the DBA OS user that will manage GBase. It must be created on all three servers.
useradd gbase
passwd gbase

In this exercise the password of the gbase OS user is set to: gbase

2. Create the installation directory on all three server nodes.
mkdir -p /opt/gbase
chown gbase:gbase /opt/gbase
chown gbase:gbase /tmp
Notes:

1. The /tmp directory holds temporary GBase-related files; changing its owner and group ensures the gbase OS user has read/write access to it.
2. The installation directory can also be a custom path. If the large disk on your servers is mounted at /data, you can use, for example:

mkdir -p /data/gbase
chown -R gbase:gbase /data/gbase
3. Upload and extract the installation package.

Use Xftp (or any other tool that can upload files to a Linux server) to upload the GBase 8a installation package to the /opt directory on the master node 192.168.172.125 only (it is not needed on the other nodes), and extract it there as the root user.

[root@local125 opt]# cd /opt
[root@local125 opt]# tar xfj GBase8a_MPP_Cluster-License-9.5.2.39-redhat7.3-x86_64.tar.bz2
[root@local125 opt]# ll -h
total 134M
drwxr-xr-x 5 gbase gbase   49 Jul 20 19:04 gbase
-rw-r--r-- 1 root  root  134M Jul 20 18:48 GBase8a_MPP_Cluster-License-9.5.2.39-redhat7.3-x86_64.tar.bz2
drwxrwxr-x 2 gbase gbase 4.0K Jul 20 19:06 gcinstall

After extraction, the gcinstall installation directory appears under /opt.

4. Configure the system environment

From the extracted gcinstall directory, copy the SetSysEnv.py file to the /opt directory on the slave nodes 126 and 127.

[root@local125 opt]# scp gcinstall/SetSysEnv.py 192.168.172.126:/opt
root@192.168.172.126's password: 
SetSysEnv.py                                                                               100%   27KB   7.7MB/s   00:00    
[root@local125 opt]# scp gcinstall/SetSysEnv.py 192.168.172.127:/opt
root@192.168.172.127's password: 
SetSysEnv.py                                                                               100%   27KB   5.7MB/s   00:00    

Run the SetSysEnv.py script to configure the installation environment.

Run on the master node:
[root@local125 opt]# cd gcinstall/
[root@local125 gcinstall]# ls
BUILDINFO            CorosyncConf.py  fulltext.py              gcwareGroup.json  license.txt      rmt.py
bundle_data.tar.bz2  demo.options     gbase_data_timezone.sql  gethostsid        pexpect.py       rootPwd.json
bundle.tar.bz2       dependRpms       gccopy.py                importLicense     replace.py       SetSysEnv.py
CGConfigChecker.py   example.xml      gcexec.py                InstallFuns.py    replaceStop.py   SSHThread.py
chkLicense           extendCfg.xml    gcgenfinger              InstallTar.py     RestoreLocal.py  unInstall_fulltext.py
cluster.conf         FileCheck.py     gcinstall.py             License           Restore.py       unInstall.py
[root@local125 gcinstall]# python SetSysEnv.py --dbaUser=gbase --installPrefix=/opt/gbase --cgroup
Fail to modify system parameters. Reason: Fail to exec echo "`grep MemTotal /proc/meminfo | awk '{printf "%.0f",$2}'`/10" | bc, reason: /bin/sh: bc: command not found
. Please refer to /tmp/SetSysEnv.log
[root@local125 gcinstall]# cat /tmp/SetSysEnv.log
2022-07-20 18:53:00,711-root-INFO Start modification of system env.
2022-07-20 18:53:00,712-root-INFO Set kernal parameters...
2022-07-20 18:53:00,720-root-INFO ['kernel.core_uses_pid = 1\n', 'net.core.netdev_max_backlog = 262144\n', 'net.core.rmem_default = 8388608\n', 'net.core.rmem_max = 16777216\n', 'net.core.somaxconn = 32767\n', 'net.core.wmem_default = 8388608\n', 'net.core.wmem_max = 16777216\n', 'net.ipv4.tcp_max_syn_backlog = 262144\n', 'net.ipv4.tcp_rmem = 4096 87380 4194304\n', 'net.ipv4.tcp_sack = 1\n', 'net.ipv4.ip_local_reserved_ports = 5050,5258,5288,6666,6268\n', 'net.ipv4.tcp_syncookies = 1\n', 'net.ipv4.tcp_window_scaling = 1\n', 'net.ipv4.tcp_wmem = 4096 16384 4194304\n', 'vm.vfs_cache_pressure = 1024\n', 'vm.swappiness = 1\n', 'vm.overcommit_memory = 0\n', 'vm.zone_reclaim_mode = 0\n']
2022-07-20 18:53:00,721-root-INFO /sbin/sysctl -p
2022-07-20 18:53:00,732-root-INFO Set kernal parameters to end.
2022-07-20 18:53:00,732-root-INFO Set system open files...
2022-07-20 18:53:00,732-root-INFO reference: ulimit -n, value: 655360
2022-07-20 18:53:00,732-root-INFO exec cmd:[ `ulimit -n` == 655360 ]
2022-07-20 18:53:00,745-root-INFO sed -i.bck -e 's/^\(\*\s*\)\(soft\|hard\)\(\s\+nofile\)/#Commented out by gcluster\n#\1\2\3/g' /etc/security/limits.conf
2022-07-20 18:53:00,752-root-INFO sed -i.bck -e 's/^\(gbase\s*\)\(soft\|hard\)\(\s\+nofile\)/#Commented out by gcluster\n#\1\2\3/g' /etc/security/limits.conf 
2022-07-20 18:53:00,758-root-INFO sed -i.bck -e 's/^\(\*\s*\)\(soft\|hard\)\(\s\+nproc\)/#Commented out by gcluster\n#\1\2\3/g' /etc/security/limits.conf 
2022-07-20 18:53:00,766-root-INFO sed -i.bck -e 's/^\(gbase\s*\)\(soft\|hard\)\(\s\+nproc\)/#Commented out by gcluster\n#\1\2\3/g' /etc/security/limits.conf 
2022-07-20 18:53:00,772-root-INFO sed -i.bck -e 's/^\(\*\s*\)\(soft\|hard\)\(\s\+sigpending\)/#Commented out by gcluster\n#\1\2\3/g' /etc/security/limits.conf 
2022-07-20 18:53:00,792-root-INFO sed -i.bck -e 's/^\(gbase\s*\)\(soft\|hard\)\(\s\+sigpending\)/#Commented out by gcluster\n#\1\2\3/g' /etc/security/limits.conf 
2022-07-20 18:53:00,807-root-INFO sed -i -e 's/^# End of file/#/g' /etc/security/limits.conf 
2022-07-20 18:53:00,819-root-INFO /bin/bash -c "echo -e '# Added by gcluster\ngbase\tsoft\tnofile\t655360' >>/etc/security/limits.conf"
2022-07-20 18:53:00,830-root-INFO /bin/bash -c "echo -e '# Added by gcluster\ngbase\thard\tnofile\t655360' >>/etc/security/limits.conf"
2022-07-20 18:53:00,837-root-INFO /bin/bash -c "echo -e '# End of file' >>/etc/security/limits.conf"
2022-07-20 18:53:00,863-root-INFO /bin/bash -c "if [ -f /etc/pam.d/su ]; then if [ `egrep '^[[:space:]]*session[[:space:]]+required[[:space:]]+pam_limits\.so' /etc/pam.d/su |                wc -l` -eq 0 ]; then echo 'session required pam_limits.so' >> /etc/pam.d/su;  fi fi"
2022-07-20 18:53:00,882-root-INFO sed -i "s/\(^\*\s*soft\s*nproc.*\)/#\1/g" /etc/security/limits.d/*-nproc.conf
2022-07-20 18:53:00,882-root-INFO Set system open files to end.
2022-07-20 18:53:00,882-root-INFO Set system file size...
2022-07-20 18:53:00,882-root-INFO reference: ulimit -f, value: unlimited
2022-07-20 18:53:00,882-root-INFO exec cmd:[ `ulimit -f` == "unlimited" ]
2022-07-20 18:53:00,889-root-INFO exec cmd:[ `ulimit -f` == "unlimited" ], return value: unlimited
2022-07-20 18:53:00,890-root-INFO Set system file size to end.
2022-07-20 18:53:00,890-root-INFO Set system kernal paramter 'file max'...
2022-07-20 18:53:00,898-root-INFO Set system kernal paramter 'file max' to end.
2022-07-20 18:53:00,898-root-INFO Set system kernal paramter 'max_map_count'...
2022-07-20 18:53:00,922-root-INFO /sbin/sysctl -w vm.max_map_count=180965
2022-07-20 18:53:00,926-root-INFO sed -i.bck -e 's/^\(\s*vm\.max_map_count\)/#Commented out by gcluster\n#\1/g' /etc/sysctl.conf 
2022-07-20 18:53:00,930-root-INFO /bin/bash -c "echo '# Added by gcluster' >> /etc/sysctl.conf"
2022-07-20 18:53:00,934-root-INFO /bin/bash -c "echo 'vm.max_map_count = 180965' >> /etc/sysctl.conf"
2022-07-20 18:53:00,934-root-INFO Set system kernal paramter 'max_map_count' to end.
2022-07-20 18:53:00,934-root-INFO Set system kernal paramter 'min_free_kbytes'...
2022-07-20 18:53:00,942-root-ERROR 
2022-07-20 18:53:00,942-root-ERROR /bin/sh: bc: command not found

2022-07-20 18:53:00,942-root-ERROR Fail to exec echo "`grep MemTotal /proc/meminfo | awk '{printf "%.0f",$2}'`/10" | bc, reason: /bin/sh: bc: command not found

Because the three servers were installed with a minimal OS, the script complains that the bc command is missing, so a local yum repository has to be configured on all three servers in order to install the missing package.
For setting up a local yum repository, refer to a separate local yum repository configuration guide; a minimal sketch is given below.
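A minimal sketch of such a local repository, assuming the CentOS 7.6 installation ISO is attached to each virtual machine and mounted at /media/cdrom (the repository id c7-media matches the one that appears in the yum output below); run as root on every node:

mkdir -p /media/cdrom
mount /dev/cdrom /media/cdrom
cat > /etc/yum.repos.d/c7-media.repo <<'EOF'
[c7-media]
name=CentOS-7 - Media
baseurl=file:///media/cdrom
gpgcheck=0
enabled=1
EOF
yum clean all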
Configure the yum repository on all three servers, then install the bc package:

Run on the master node; repeat on the slave nodes.
[root@local125 gcinstall]# yum -y install bc
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package bc.x86_64 0:1.06.95-13.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================
 Package                 Arch                        Version                             Repository                     Size
=============================================================================================================================
Installing:
 bc                      x86_64                      1.06.95-13.el7                      c7-media                      115 k

Transaction Summary
=============================================================================================================================
Install  1 Package

Total download size: 115 k
Installed size: 215 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : bc-1.06.95-13.el7.x86_64                                                                                  1/1 
  Verifying  : bc-1.06.95-13.el7.x86_64                                                                                  1/1 

Installed:
  bc.x86_64 0:1.06.95-13.el7                                                                                                 

Complete!

After the dependency is installed on all three servers, run the environment configuration command again:

[root@local125 gcinstall]# python SetSysEnv.py --dbaUser=gbase --installPrefix=/opt/gbase --cgroup
[root@local126 opt]# python SetSysEnv.py --dbaUser=gbase --installPrefix=/opt/gbase --cgroup
[root@local127 opt]# python SetSysEnv.py --dbaUser=gbase --installPrefix=/opt/gbase --cgroup

This time the script completed without errors.

III. Installation

1. The installation only needs to be run on the master node (125), using the gbase DBA OS user.
[root@local125 gcinstall]# su - gbase
Last login: Wed Jul 20 18:57:30 CST 2022 on pts/0
[gbase@local125 ~]$ cd /opt/gcinstall/
[gbase@local125 gcinstall]$ vim demo.options 

The modified demo.options file is as follows:

[gbase@local125 gcinstall]$ cat demo.options 
installPrefix= /opt/gbase
coordinateHost = 192.168.172.125,192.168.172.126
coordinateHostNodeID = 234,235,237
dataHost = 192.168.172.125,192.168.172.126,192.168.172.127
#existCoordinateHost =
#existDataHost =
dbaUser = gbase
dbaGroup = gbase
dbaPwd = 'gbase'
rootPwd = 'zhangyahui'
#rootPwdFile = rootPwd.json

Notes:
installPrefix: the software installation directory.
coordinateHost: the IPs of the servers that will host coordinator nodes; here 125 and 126 are coordinator nodes.
coordinateHostNodeID: numeric IDs for the coordinator nodes, usually set to the last octet of each IP.
dataHost: the IPs of the servers that will host data nodes; here all three servers are data nodes.
dbaUser, dbaGroup: the owner and group of the DBA OS user; the official installation uses gbase.
dbaPwd: the password of the DBA OS user.
rootPwd: the root password, in plain text.
#rootPwdFile: (commented out) a file from which the root password is read, used when you would rather not put the root password in plain text via rootPwd. In this exercise the root password is given in plain text.

2. Run the installation

Note that the command must be run from the gcinstall directory created when the package was extracted.

[root@local125 ~]# su - gbase
[gbase@local125 ~]$ cd /opt/gcinstall
[gbase@local125 gcinstall]$ ./gcinstall.py --silent=demo.options 
*********************************************************************************
Thank you for choosing GBase product!


Please read carefully the following licencing agreement before installing GBase product:
TIANJIN GENERAL DATA TECHNOLOGY CO., LTD. LICENSE AGREEMENT
 
READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED SUPPLEMENTAL LICENSETERMS (COLLECTIVELY "AGREEMENT") CAREFULLY BEFORE OPENING THE SOFTWAREMEDIA PACKAGE.  BY OPENING THE SOFTWARE MEDIA PACKAGE, YOU AGREE TO THE TERMS OF THIS AGREEMENT.  IF YOU ARE ACCESSING THE SOFTWARE ELECTRONICALLY, INDICATE YOUR ACCEPTANCE OF THESE TERMS.  IF YOU DO NOT AGREE TO ALL THESE TERMS, PROMPTLY RETURN THE UNUSED SOFTWARE TO YOUR PLACE OF PURCHASE FOR A REFUND.
 

1.  LICENSE TO USE.  GeneralData grants you a non-exclusive and non-transferable license for the internal use only of the accompanying software and documentation and any error corrections provided by GeneralData(collectively "Software"), by the number of users and the class of computer hardware for which the corresponding fee has been paid.
 
2.  RESTRICTIONS.  Software is confidential and copyrighted. Title to Software and all associated intellectual property rights is retained by GeneralData and/or its licensors.  Except as specifically authorized in any Supplemental License Terms, you may not make copies of Software, other than a single copy of Software for archival purposes.  Unless enforcement is prohibited by applicable law, you may not modify,decompile, or reverse engineer Software.  You acknowledge that Software is not designed, licensed or intended for use in the design,construction, operation or maintenance of any nuclear facility.  GeneralData disclaims any express or implied warranty of fitness for such uses.No right, title or interest in or to any trademark, service mark, logo or trade name of GeneralData or its licensors is granted under this Agreement.
 
3.  DISCLAIMER OF WARRANTY.  Unless specified in this agreement, all express of implied conditions, representations and warranties, including any implied warranty of merchantability, fitness for aparticular purpose or non-infringement are disclaimed, except to theextent that these disclaimers are held to be legally invalid.
 
4.  LIMITATION OF LIABILITY.  To the extent not prohibited by law, in no event will GeneralData or its licensors be liable for any lost revenue, profit or data, or for special, indirect, consequential,incidental orpunitive damages, however caused regardless of the theory of liability, arising out of or related to the use of or inability to use software, even if GeneralData has been advised of the possibility of such damages.In no event will GeneralData's libility to you, whether incontract, tort (including negligence), or otherwise, exceed the amount paid by you for Software under this Agreement.  The foregoing limitations will apply even if the above stated warranty fails of itsessential purpose.
 
5.  TERMINATION.  This Agreement is effective until terminated.  You may terminate this Agreement at any time by destroying all copies of Software.  This Agreement will terminate immediately without noticefrom GeneralData if you fail to comply with any provision of this Agreement.Upon Termination, you must destroy all copies of Software.
 
6.  EXPORT REGULATIONS.  All Software and technical data delivered under this Agreement are subject to US export control laws and may be subject to export or import regulations in other countries.  You agree to comply strictly with all such laws and regulations and acknowledge that you have the responsibility to obtain such licenses to export,re-export, or import as may be required after delivery to you.
 
7.  CHINESE GOVERNMENT RESTRICTED.  If Software is being acquired by or on behalf of the Chinese Government , then the Government's rights in Software and accompanying documentation will be only as set forth in this Agreement.
 
8.  GOVERNING LAW.  Any action related to this Agreement will be governed by Chinese law: "COPYRIGHT LAW OF THE PEOPLE'S REPUBLIC OF CHINA","PATENT LAW OF THE PEOPLE'S REPUBLIC OF CHINA","TRADEMARK LAW OF THE PEOPLE'S REPUBLIC OF CHINA","COMPUTER SOFTWARE PROTECTION REGULATIONS OF THE PEOPLE'S REPUBLIC OF CHINA".  No choice of law rules of any jurisdiction will apply."
 
9.  SEVERABILITY.  If any provision of this Agreement is held to be unenforceable, this Agreement will remain in effect with the provision omitted, unless omission would frustrate the intent of the parties, inwhich case this Agreement will immediately terminate.
 
10. INTEGRATION.  This Agreement is the entire agreement between you and GeneralData relating to its subject matter.  It supersedes all prior or contemporaneous oral or written communications, proposals,representations and warranties and prevails over any conflicting or additional terms of any quote, order, acknowledgment, or other communication between the parties relating to its subject matter during the term of this Agreement.  No modification of this Agreement will be binding, unless in writing and signed by an authorize depresentative of each party. When the translation document has the different meaning or has the conflicting views with Chinese original text conflict, should take the laws and regulations promulgation unit as well as the Generaldata issue Chinese original text as the standard. 

*********************************************************************************
Do you accept the above licence agreement ([Y,y]/[N,n])? Y
*********************************************************************************
                     Welcome to install GBase products
*********************************************************************************
Environmental Checking on gcluster nodes.
Cgconfig service is not exist on host ['192.168.172.125', '192.168.172.126', '192.168.172.127'], resource manangement can not be used, continue ([Y,y]/[N,n])? Y

Two interactive prompts appear during installation. The first asks whether you accept the license agreement above; enter Y. The second, according to the official documentation, is a warning shown when the OS does not have the cgroup (resource management) component; enter Y and press Enter to continue.
The official documentation also notes that the installer first runs an environment check and may report errors listing missing rpm dependency packages; in that case the listed rpm packages have to be installed on each node.
The full list of rpm packages required by 8a is in the dependRpms file under the gcinstall directory; its contents are shown below.

[gbase@local125 gcinstall]$ cat dependRpms 
pcre
krb5-libs
libdb
glibc
keyutils-libs
libidn
libuuid
ncurses-libs
libgpg-error
libgomp
libstdc++
libcom_err
libgcc
python-libs
libselinux
libgcrypt
nss-softokn-freebl

Install the required dependency packages on all three servers.
Run on the master node; repeat on the slave nodes:

[root@local125 ~]# yum -y install pcre krb5-libs libdb glibc keyutils-libs libidn libuuid ncurses-libs libgpg-error libgomp libstdc++ libcom_err libgcc python-libs libselinux libgcrypt nss-softokn-freebl
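Alternatively, on the master node, where the dependRpms file is present, the whole list can be fed to yum in one go; this is only a shortcut sketch and assumes the names in dependRpms match the package names in your repository:

cd /opt/gcinstall
yum -y install $(cat dependRpms)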

After the dependencies are installed, run gcinstall.py again; the rest of the output is as follows:

CoordinateHost:
192.168.172.125    192.168.172.126
DataHost:
192.168.172.125    192.168.172.126    192.168.172.127
Are you sure to install GCluster on these nodes ([Y,y]/[N,n])? Y
192.168.172.127      	start install on host 192.168.172.127
192.168.172.126      	start install on host 192.168.172.126
192.168.172.125      	start install on host 192.168.172.125
192.168.172.127      	mkdir /opt/gbase/cluster_prepare on host 192.168.172.127.
192.168.172.126      	mkdir /opt/gbase/cluster_prepare on host 192.168.172.126.
192.168.172.125      	start install on host 192.168.172.125
192.168.172.127      	mkdir /opt/gbase/cluster_prepare on host 192.168.172.127.
192.168.172.126      	mkdir /opt/gbase/cluster_prepare on host 192.168.172.126.
192.168.172.125      	mkdir /opt/gbase/cluster_prepare on host 192.168.172.125.
192.168.172.127      	Copying InstallTar.py to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying InstallTar.py to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying InstallTar.py to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying InstallFuns.py to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying InstallFuns.py to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying InstallFuns.py to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying InstallFuns.py to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying InstallFuns.py to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying InstallFuns.py to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying rmt.py to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying rmt.py to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying rmt.py to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying SSHThread.py to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying SSHThread.py to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying SSHThread.py to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying RestoreLocal.py to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying SSHThread.py to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying RestoreLocal.py to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying RestoreLocal.py to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying RestoreLocal.py to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying RestoreLocal.py to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying pexpect.py to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying pexpect.py to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying pexpect.py to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying pexpect.py to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying pexpect.py to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying pexpect.py to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying BUILDINFO to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying BUILDINFO to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying BUILDINFO to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying BUILDINFO to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle_data.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle_data.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle_data.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle_data.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle_data.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Copying bundle_data.tar.bz2 to host 192.168.172.127:/opt/gbase/cluster_prepare
192.168.172.126      	Copying bundle_data.tar.bz2 to host 192.168.172.126:/opt/gbase/cluster_prepare
192.168.172.125      	Copying bundle.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Installing gcluster.
192.168.172.126      	Installing gcluster.
192.168.172.125      	Copying bundle_data.tar.bz2 to host 192.168.172.125:/opt/gbase/cluster_prepare
192.168.172.127      	Installing gcluster.
192.168.172.126      	Installing gcluster.
192.168.172.125      	Installing gcluster.
......(output omitted)......
192.168.172.125      	Installing gcluster.
192.168.172.127      	install cluster on host 192.168.172.127 successfully.
192.168.172.126      	Installing gcluster.
192.168.172.127      	install cluster on host 192.168.172.127 successfully.
192.168.172.126      	Installing gcluster.
192.168.172.125      	install cluster on host 192.168.172.125 successfully.
192.168.172.127      	install cluster on host 192.168.172.127 successfully.
192.168.172.126      	install cluster on host 192.168.172.126 successfully.
192.168.172.125      	install cluster on host 192.168.172.125 successfully.
Starting all gcluster nodes...
start service failed on host 192.168.172.127.
start service failed on host 192.168.172.125.
start service failed on host 192.168.172.126.
adding new datanodes to gcware...
InstallCluster Successfully.

The cluster software is now installed. Because no license has been imported yet, the final step of starting the cluster services on each node fails.

3. Check the cluster state

At this point, running the cluster status command produces the following message:

[gbase@local125 gcinstall]$ gcadmin
-bash: gcadmin: command not found

This happens because the environment set up during installation has not taken effect in the current shell: you need to exit back to the root user and switch to the gbase user again. The sketch below shows a quick check that the command is then on the PATH, and the actual session follows it:
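A quick verification after re-logging in as gbase (plain shell commands, nothing GBase-specific is assumed here):

which gcadmin
echo $PATH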

[root@local125 ~]# su - gbase
Last login: Wed Jul 20 19:14:47 CST 2022 on pts/0
[gbase@local125 ~]$ gcadmin
CLUSTER STATE:         ACTIVE

==================================================================
|             GBASE COORDINATOR CLUSTER INFORMATION              |
==================================================================
|   NodeName   |    IpAddress    | gcware | gcluster | DataState |
------------------------------------------------------------------
| coordinator1 | 192.168.172.125 |  OPEN  |  CLOSE   |     0     |
------------------------------------------------------------------
| coordinator2 | 192.168.172.126 |  OPEN  |  CLOSE   |     0     |
------------------------------------------------------------------
================================================================
|           GBASE CLUSTER FREE DATA NODE INFORMATION           |
================================================================
| NodeName  |    IpAddress    | gnode | syncserver | DataState |
----------------------------------------------------------------
| FreeNode1 | 192.168.172.125 | CLOSE |    OPEN    |     0     |
----------------------------------------------------------------
| FreeNode2 | 192.168.172.126 | CLOSE |    OPEN    |     0     |
----------------------------------------------------------------
| FreeNode3 | 192.168.172.127 | CLOSE |    OPEN    |     0     |
----------------------------------------------------------------

0 virtual cluster
2 coordinator node
3 free data node

Because there is no license, the cluster services are also still closed. You need to apply for a license, import it, and then restart the cluster services for the cluster to come up normally.

4. Obtain the license

Generate the server "fingerprint" information. The tool (gethostsid) is located in the directory where the installation package was extracted; run it as the DBA user.

[gbase@local125 ~]$ cd /opt/gcinstall/
[gbase@local125 gcinstall]$ ls
192.168.172.125.options  cluster.conf     gbase_data_timezone.sql  GetOSType.py     replace.py       SSHThread.pyc
192.168.172.126.options  CorosyncConf.py  gcChangeInfo.xml         importLicense    replaceStop.py   unInstall_fulltext.py
192.168.172.127.options  demo.options     gccopy.py                InstallFuns.py   RestoreLocal.py  unInstall.py
BUILDINFO                dependRpms       gcexec.py                InstallFuns.pyc  Restore.py       unInstall.pyc
bundle_data.tar.bz2      example.xml      gcgenfinger              InstallTar.py    rmt.py           vclink.xml
bundle.tar.bz2           extendCfg.xml    gcinstall.log            License          rmt.pyc
CGConfigChecker.py       FileCheck.py     gcinstall.py             license.txt      rootPwd.json
CGConfigChecker.pyc      FileCheck.pyc    gcwareGroup.json         pexpect.py       SetSysEnv.py
chkLicense               fulltext.py      gethostsid               pexpect.pyc      SSHThread.py
[gbase@local125 gcinstall]$ ./gethostsid -n 192.168.172.125,192.168.172.126,192.168.172.127 -f /tmp/finger.txt -u gbase -p gbase
======================================================================
Successful node nums:	3
======================================================================

View the contents of the generated file:

[gbase@local125 gcinstall]$ cat /tmp/finger.txt 
{"HWADDR":"00:50:56:3A:93:BC","SOCKETS":1,"ARCHITECTURE":"x86_64","BYTE ORDER":"Little Endian","MODEL":"61","THREADS":2,"CPUS":2,"NNNODES":1,"CONFUSE DATA":"x3.k5,MOAW;~,$$"}
{"HWADDR":"00:50:56:2F:B5:AC","SOCKETS":1,"ARCHITECTURE":"x86_64","BYTE ORDER":"Little Endian","MODEL":"61","THREADS":2,"CPUS":2,"NNNODES":1,"CONFUSE DATA":"x3.k5,MOAW;~,$$"}
{"HWADDR":"00:50:56:24:1A:88","SOCKETS":1,"ARCHITECTURE":"x86_64","BYTE ORDER":"Little Endian","MODEL":"61","THREADS":2,"CPUS":2,"NNNODES":1,"CONFUSE DATA":"%rff%C:a&-tosp%"}

Send the generated server "fingerprint" file to the vendor by email to obtain the license.
Official email address: license@gbase.cn
CC: shenliping@gbase.cn
Attachment: the fingerprint file finger.txt.
Email subject: GBase 8a MPP Cluster v95 license application
Email body:
Customer name: your organization's full name
Project name: GBase 8a MPP Cluster GDCA certification training, month X of 2022
Applicant: your name
Reason for application: training / hands-on practice
Validity period: 3 months
OS name and version: CentOS Linux release 7.6.1810 (Core)
8a cluster version: GBase8a_MPP_Cluster-License-9.5.2.39-redhat7.3-x86_64.tar.bz2

The OS name and version can be obtained with cat /etc/redhat-release.

5. Import the license file

After the vendor replies, you receive the license file 20220721-02.lic.
Upload it to the /tmp directory on the master node (125), switch to the DBA user gbase, and run the import command:

[root@local125 ~]# su - gbase
[gbase@local125 ~]$ cd /opt/gcinstall/
[gbase@local125 gcinstall]$ ./License -n 192.168.172.125,192.168.172.126,192.168.172.127 -f /tmp/20220721-02.lic -u gbase -p gbase
======================================================================
Successful node nums:	3
======================================================================

After a successful import, run the check command to verify the license has taken effect:

[gbase@local125 gcinstall]$ ./chkLicense -n 192.168.172.125,192.168.172.126,192.168.172.127  -u gbase -p gbase
======================================================================
192.168.172.127
is_exist:yes
version:trial
expire_time:20221021
is_valid:yes
======================================================================
192.168.172.126
is_exist:yes
version:trial
expire_time:20221021
is_valid:yes
======================================================================
192.168.172.125
is_exist:yes
version:trial
expire_time:20221021
is_valid:yes
6. Restart the cluster services on all nodes

Run the cluster service start command on all three nodes.

Master node 125:
[gbase@local125 gcinstall]$ gcluster_services all start
Starting gcware :                                          [  OK  ]
Starting gcluster :                                        [  OK  ]
Starting gcrecover :                                       [  OK  ]
Starting gbase :                                           [  OK  ]
Starting syncserver :                                      [  OK  ]
Starting GCMonit success!
Slave node 126:
[gbase@local126 ~]$ gcluster_services all start
Starting gcware :                                          [  OK  ]
Starting gcluster :                                        [  OK  ]
Starting gcrecover :                                       [  OK  ]
Starting gbase :                                           [  OK  ]
Starting syncserver :                                      [  OK  ]
Starting GCMonit success!

Slave node 127:
[gbase@local127 ~]$ gcluster_services all start
Starting gbase :                                           [  OK  ]
Starting syncserver :                                      [  OK  ]
Starting GCMonit success!

Because 125 and 126 are both coordinator and data nodes, their startup output differs from that of 127, which is a data node only.

Check the cluster state:

[gbase@local125 gcinstall]$ gcadmin
CLUSTER STATE:         ACTIVE

==================================================================
|             GBASE COORDINATOR CLUSTER INFORMATION              |
==================================================================
|   NodeName   |    IpAddress    | gcware | gcluster | DataState |
------------------------------------------------------------------
| coordinator1 | 192.168.172.125 |  OPEN  |   OPEN   |     0     |
------------------------------------------------------------------
| coordinator2 | 192.168.172.126 |  OPEN  |   OPEN   |     0     |
------------------------------------------------------------------
================================================================
|           GBASE CLUSTER FREE DATA NODE INFORMATION           |
================================================================
| NodeName  |    IpAddress    | gnode | syncserver | DataState |
----------------------------------------------------------------
| FreeNode1 | 192.168.172.125 | OPEN  |    OPEN    |     0     |
----------------------------------------------------------------
| FreeNode2 | 192.168.172.126 | OPEN  |    OPEN    |     0     |
----------------------------------------------------------------
| FreeNode3 | 192.168.172.127 | OPEN  |    OPEN    |     0     |
----------------------------------------------------------------

0 virtual cluster
2 coordinator node
3 free data node

Note: the cluster state query command is only available on coordinator nodes.

IV. Configure the Distribution and Initialize the Database

1. Configure the distribution.

Run this on the master node (125) as the DBA user gbase. It uses the gcChangeInfo.xml file in the directory where the installation package was extracted; this file is generated automatically during installation and records the information of each data node.

[gbase@local125 gcinstall]$ cat gcChangeInfo.xml 
<?xml version="1.0" encoding="utf-8"?>
<servers>
 <rack>
  <node ip="192.168.172.125"/>
  <node ip="192.168.172.126"/>
  <node ip="192.168.172.127"/>
 </rack>
</servers>

Run the distribution command:

[gbase@local125 ~]$ cd /opt/gcinstall/
[gbase@local125 gcinstall]$ gcadmin distribution gcChangeInfo.xml p 2 d 1 pattern 1
gcadmin generate distribution ...

NOTE: node [192.168.172.125] is coordinator node, it shall be data node too
NOTE: node [192.168.172.126] is coordinator node, it shall be data node too
gcadmin generate distribution successful

gcChangeInfo.xml: the file describing the mapping between cluster nodes and racks, stored in the gcinstall directory by default.
p: the number of primary segments stored on each data node. Note: in pattern 1 mode, p must satisfy 1 <= p < (number of nodes in the rack).
d: the number of replicas of each primary segment; valid values are 0, 1 or 2, and the default is 1.
With 3 nodes and p 2, the cluster creates 3 x 2 = 6 primary segments, and with d 1 each segment gets one replica on another node, which matches the showdistribution output further below.
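For comparison, a lighter layout with a single primary segment per node, using the same documented syntax and within the stated constraints (shown only as a sketch; it was not run in this walkthrough), would be:

gcadmin distribution gcChangeInfo.xml p 1 d 1 pattern 1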

After the distribution is configured, the cluster state looks like this:

[gbase@local125 gcinstall]$ gcadmin
CLUSTER STATE:         ACTIVE
VIRTUAL CLUSTER MODE:  NORMAL

==================================================================
|             GBASE COORDINATOR CLUSTER INFORMATION              |
==================================================================
|   NodeName   |    IpAddress    | gcware | gcluster | DataState |
------------------------------------------------------------------
| coordinator1 | 192.168.172.125 |  OPEN  |   OPEN   |     0     |
------------------------------------------------------------------
| coordinator2 | 192.168.172.126 |  OPEN  |   OPEN   |     0     |
------------------------------------------------------------------
=========================================================================================================
|                                    GBASE DATA CLUSTER INFORMATION                                     |
=========================================================================================================
| NodeName |                IpAddress                 | DistributionId | gnode | syncserver | DataState |
---------------------------------------------------------------------------------------------------------
|  node1   |             192.168.172.125              |       1        | OPEN  |    OPEN    |     0     |
---------------------------------------------------------------------------------------------------------
|  node2   |             192.168.172.126              |       1        | OPEN  |    OPEN    |     0     |
---------------------------------------------------------------------------------------------------------
|  node3   |             192.168.172.127              |       1        | OPEN  |    OPEN    |     0     |
---------------------------------------------------------------------------------------------------------

Or use the following command to view more detailed distribution information:

[gbase@local125 gcinstall]$ gcadmin showdistribution node
                                      Distribution ID: 1 | State: new | Total segment num: 6

====================================================================================================================================
|  nodes   |            192.168.172.125            |            192.168.172.126            |            192.168.172.127            |
------------------------------------------------------------------------------------------------------------------------------------
| primary  |                  1                    |                  2                    |                  3                    |
| segments |                  4                    |                  5                    |                  6                    |
------------------------------------------------------------------------------------------------------------------------------------
|duplicate |                  3                    |                  1                    |                  2                    |
|segments 1|                  5                    |                  6                    |                  4                    |
====================================================================================================================================
2. Initialize the database

These operations are run inside the database. Log in as the database user root; the default password is empty. If the database has not been initialized, new databases cannot be created: as shown below, creating the test database fails before initialization and succeeds after running initnodedatamap.

[gbase@local125 gcinstall]$ gccli -u root -p
Enter password: 

GBase client 9.5.2.39.126761. Copyright (c) 2004-2022, GBase.  All Rights Reserved.

gbase> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| performance_schema |
| gbase              |
| gctmpdb            |
+--------------------+
4 rows in set (Elapsed: 00:00:00.00)

gbase> create database test;
ERROR 1707 (HY000): gcluster command error: (GBA-02CO-0003) nodedatamap is not initialized.
gbase> initnodedatamap;
Query OK, 0 rows affected (Elapsed: 00:00:02.93)

gbase> create database test;
Query OK, 1 row affected (Elapsed: 00:00:00.09)

gbase> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| performance_schema |
| gbase              |
| gctmpdb            |
| gclusterdb         |
| test               |
+--------------------+
6 rows in set (Elapsed: 00:00:00.00)

gbase> 

Summary

During the installation, pay attention to which operations are run as root and which as the DBA user gbase. Also, for commands run as the gbase user, make sure gbase has read/write permission on the directories involved, otherwise those commands will fail.
