Greenplum 5.x node expansion

This document records in detail the process of adding expansion nodes to Greenplum 5.16.0 on Red Hat Enterprise Linux Server 7.5. It first introduces the system environment and the current cluster state, confirming that all components are running normally. It then covers the system adjustments: stopping the firewall, disabling SELinux, and creating a new logical volume. Next, the new server addresses are added and the user is created and configured. Finally, the expansion is executed, the new nodes are successfully joined to the Greenplum cluster, and the port status of each node is verified.

1. Environment:
more /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)
The OS was installed using the minimal-install option.

greenplum-db-5.16.0-rhel7-x86_64.zip

more /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.15.201  rhmdw
192.168.15.202  rhsdw1
192.168.15.203  rhsdw2


The Greenplum expansion process works by generating an inputfile, initializing the expanded cluster from that inputfile, and then manually redistributing the databases.
This test environment adds two servers; generating the inputfile requires at least two servers.

The servers being added must have the rsync and perl packages installed. When copying the GP software, both the greenplum-db and app directories must be copied, otherwise initialization will fail.

yum -y install perl.x86_64 rsync.x86_64 psmisc-22.20-15.el7.x86_64
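As a quick sanity check before copying the software, the sketch below defines a small hypothetical helper, `check_cmds` (not part of Greenplum), that reports any missing prerequisite commands; run it on each new host.

```shell
# Hypothetical helper: report any of the given commands missing from PATH.
check_cmds() {
  local missing=0 c
  for c in "$@"; do
    command -v "$c" >/dev/null 2>&1 || { echo "missing: $c"; missing=1; }
  done
  return "$missing"
}

# Run on each expansion host before copying the software:
if check_cmds rsync perl; then echo "prerequisites OK"; fi
```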

1.1 Check the existing cluster status:
[gpadmin@rhmdw ~]$ gpstate -b
20190404:09:37:55:001960 gpstate:rhmdw:gpadmin-[INFO]:-Starting gpstate with args: -b
20190404:09:37:55:001960 gpstate:rhmdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44'
20190404:09:37:55:001960 gpstate:rhmdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Jan 16 2019 02:32:15'
20190404:09:37:55:001960 gpstate:rhmdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20190404:09:37:55:001960 gpstate:rhmdw:gpadmin-[INFO]:-Gathering data from segments...
..
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-Greenplum instance status summary
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Master instance                                           = Active
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Master standby                                            = No master standby configured
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total segment instance count from metadata                = 4
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Primary Segment Status
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total primary segments                                    = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total primary segment valid (at master)                   = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total primary segment failures (at master)                = 0
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number postmaster processes found                   = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Mirror Segment Status
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total mirror segments                                     = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total mirror segment valid (at master)                    = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total mirror segment failures (at master)                 = 0
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number postmaster processes found                   = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number mirror segments acting as primary segments   = 0
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-   Total number mirror segments acting as mirror segments    = 2
20190404:09:37:57:001960 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------


[gpadmin@rhmdw ~]$ psql -d postgres
psql (8.3.23)
Type "help" for help.

postgres=# select a.dbid,a.content,a.role,a.port,a.hostname,b.fsname,c.fselocation from gp_segment_configuration a,pg_filespace b,pg_filespace_entry c where a.dbid=c.fsedbid and b.oid=c.fsefsoid order by content;
 dbid | content | role | port | hostname |  fsname   |        fselocation        
------+---------+------+------+----------+-----------+---------------------------
    1 |      -1 | p    | 5432 | rhmdw    | pg_system | /gp/gpdata/master/gpseg-1
    4 |       0 | m    | 7000 | rhsdw2   | pg_system | /gp/gpdata/mirror/gpseg0
    2 |       0 | p    | 6000 | rhsdw1   | pg_system | /gp/gpdata/primary/gpseg0
    3 |       1 | p    | 6000 | rhsdw2   | pg_system | /gp/gpdata/primary/gpseg1
    5 |       1 | m    | 7000 | rhsdw1   | pg_system | /gp/gpdata/mirror/gpseg1
(5 rows)

The two servers to be added have the following addresses:
192.168.15.205  rhsdw03
192.168.15.206  rhsdw04
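These entries need to be appended to /etc/hosts on every node, old and new. A hypothetical idempotent helper, demonstrated here against a scratch file, avoids duplicate lines if the step is re-run:

```shell
# Hypothetical helper: append "ip  name" to a hosts file only if the name
# is not already present, so repeated runs stay idempotent.
add_host_entry() {
  local ip="$1" name="$2" file="$3"
  grep -qw "$name" "$file" || printf '%s  %s\n' "$ip" "$name" >> "$file"
}

# Demo on a scratch copy; point at /etc/hosts (as root) for the real change.
hosts_file=$(mktemp)
add_host_entry 192.168.15.205 rhsdw03 "$hosts_file"
add_host_entry 192.168.15.206 rhsdw04 "$hosts_file"
add_host_entry 192.168.15.205 rhsdw03 "$hosts_file"   # second call adds nothing
cat "$hosts_file"
```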

2. System adjustments

2.1 Stop the firewall and disable SELinux
systemctl stop firewalld.service
systemctl disable firewalld.service

 yum -y install psmisc-22.20-15.el7.x86_64 unzip
 
 sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
 setenforce 0    # apply immediately; the sed change takes effect after reboot


2.2 Create a new LV
lvcreate -n gplum -L +10000M rhel
mkfs.xfs /dev/rhel/gplum
meta-data=/dev/rhel/gplum        isize=512    agcount=4, agsize=640000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2560000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


echo "/dev/mapper/rhel-gplum   /gp                       xfs     defaults        0 0" >> /etc/fstab
mount -a
df -hT
Filesystem             Type     Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  xfs       5.0G  1.1G  4.0G   22% /
devtmpfs               devtmpfs  908M     0  908M    0% /dev
tmpfs                  tmpfs     920M     0  920M    0% /dev/shm
tmpfs                  tmpfs     920M  8.5M  911M    1% /run
tmpfs                  tmpfs     920M     0  920M    0% /sys/fs/cgroup
/dev/sda1              xfs       197M  120M   77M   61% /boot
tmpfs                  tmpfs     184M     0  184M    0% /run/user/0
/dev/sr0               iso9660   4.4G  4.4G     0  100% /mnt
/dev/mapper/rhel-gplum xfs       9.8G   33M  9.8G    1% /gp


3. Add the user

3.1 Add the user and groups


groupadd supergroup
groupadd gpadmin
useradd -m -d /home/gpadmin -g gpadmin -G gpadmin gpadmin

chown -R gpadmin:gpadmin /gp


3.2 Configure the library path, resource limits, and kernel parameters
echo "/usr/local/lib" >>  /etc/ld.so.conf
ldconfig

cat <<eof >> /etc/security/limits.conf
 * soft nofile 65536
 * hard nofile 65536
 * soft nproc 131072
 * hard nproc 131072
eof
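The limits apply to new login sessions; after logging back in as gpadmin, the effective values can be confirmed:

```shell
# Check the effective limits in a fresh session; with the settings above
# these should report 65536 and 131072 respectively.
ulimit -n   # max open files
ulimit -u   # max user processes
```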
cat <<eof >> /etc/sysctl.conf
kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 500 1024000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.overcommit_memory = 2
eof
sysctl -p
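The fixed kernel.shmmax above is sized for this small test VM. A common rule of thumb (an assumption here, not from the original) is half of physical RAM; a sketch of deriving that value:

```shell
# Sketch: size kernel.shmmax as half of physical RAM, in bytes.
mem_kb=$(grep MemTotal /proc/meminfo 2>/dev/null | awk '{print $2}')
mem_kb="${mem_kb:-4194304}"   # fall back to 4 GiB if /proc/meminfo is absent
shmmax=$(( mem_kb * 1024 / 2 ))
echo "kernel.shmmax = $shmmax"
```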


3.4 Configure time synchronization
Configured according to the customer's requirements.


3.5 Set the disk I/O scheduler to deadline (the default on RHEL 7)

3.6 Adjust the disk read-ahead parameter
blockdev --setra 65535 /dev/sda
echo "blockdev --setra 65535 /dev/sda" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local    # rc.local is not executable by default on RHEL 7

blockdev --getra /dev/sda


4.2 Configure SSH mutual trust
[gpadmin@rhmdw config]$ pwd
/gp/greenplum-db/config
[gpadmin@rhmdw config]$ cat hostlist
rhmdw
rhsdw1
rhsdw2
rhsdw03
rhsdw04
[gpadmin@rhmdw config]$ gpssh-exkeys -f hostlist
[STEP 1 of 5] create local ID and authorize on local host
  ... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts
  ... send to rhsdw1
  ... send to rhsdw2
  ... send to rhsdw03
  ... send to rhsdw04
  ***
  *** Enter password for rhsdw04:

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with rhsdw1
  ... finished key exchange with rhsdw2
  ... finished key exchange with rhsdw03
  ... finished key exchange with rhsdw04

[INFO] completed successfully

4.3 Verify mutual trust
[gpadmin@rhmdw config]$ gpssh -f /gp/app/config/hostlist -e 'pwd'
[  rhmdw] pwd
[  rhmdw] /home/gpadmin
[ rhsdw2] pwd
[ rhsdw2] /home/gpadmin
[ rhsdw1] pwd
[ rhsdw1] /home/gpadmin
[rhsdw04] pwd
[rhsdw04] /home/gpadmin
[rhsdw03] pwd
[rhsdw03] /home/gpadmin

4.4 Copy the GP software. Copying only the greenplum-db directory at this step leads to initialization failures later; copy the app directory as well.
[gpadmin@rhmdw gp]$ ls -lrt
total 273752
lrwxrwxrwx.  1 gpadmin gpadmin         5 Mar  5 11:20 greenplum-db -> ./app
drwxrwxr-x. 12 gpadmin gpadmin       152 Mar  5 11:22 app
drwxrwxr-x.  3 gpadmin gpadmin        20 Mar  5 11:37 gpadata
drwxrwxr-x.  5 gpadmin gpadmin        49 Mar  5 13:16 gpdata
drwxrwxr-x.  3 gpadmin gpadmin        36 Mar  5 15:12 gpcc
-rw-rw-r--.  1 gpadmin gpadmin 280318073 Apr  4 09:54 green.tar.gz
[gpadmin@rhmdw gp]$ tar zcvf green.tar.gz greenplum-db/*

[gpadmin@rhsdw03 gp]$ ls -l
total 273752
drwxrwxr-x. 12 gpadmin gpadmin       152 Apr  4 09:59 greenplum-db
-rw-rw-r--.  1 gpadmin gpadmin 280318073 Apr  4 09:59 green.tar.gz
[gpadmin@rhsdw03 gp]$ tar -xzvf green.tar.gz
Configure the environment variable:
echo "source /gp/greenplum-db/greenplum_path.sh" >> ~/.bash_profile

[gpadmin@rhsdw03 gp]$ rm green.tar.gz
[gpadmin@rhsdw03 gp]$ mkdir gpdata
[gpadmin@rhsdw03 gp]$ mkdir gpdata/mirror gpdata/primary

[gpadmin@rhsdw04 gp]$ rm green.tar.gz
[gpadmin@rhsdw04 gp]$ mkdir gpdata
[gpadmin@rhsdw04 gp]$ mkdir gpdata/mirror gpdata/primary
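The same directory layout must exist on every new host. A parameterized sketch (the scratch default is only for illustration; the real path is /gp/gpdata as above):

```shell
# Create the primary and mirror data directories (run as gpadmin on each new host).
# Demo uses a scratch base; on the real hosts set base=/gp/gpdata.
base="${GPDATA_BASE:-$(mktemp -d)/gpdata}"
mkdir -p "$base/primary" "$base/mirror"
ls "$base"
```

On a live system the same mkdir can be pushed to both hosts in one step with `gpssh -h rhsdw03 -h rhsdw04 -e 'mkdir -p /gp/gpdata/primary /gp/gpdata/mirror'`.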

4.5 Performance checks
Network performance:
gpcheckperf -f /gp/app/config/hostlist -r N -d /tmp > checknetwork.out
Disk performance:
gpcheckperf -f /gp/app/config/hostlist -r ds -D -d /gp/gpdata/primary -d /gp/gpdata/mirror > checkio.out
The disk I/O check targets the database data-file directories; the tool writes twice the system's memory size in data.
[gpadmin@rhmdw ~]$ gpcheckperf -f /gp/app/config/hostlist -r -N -d /tmp
/gp/greenplum-db/./bin/gpcheckperf -f /gp/app/config/hostlist -r -N -d /tmp

-------------------
--  NETPERF TEST
-------------------
[Warning] retrying with port 23012

====================
==  RESULT
====================
Netperf bisection bandwidth test
rhmdw -> rhsdw1 = 29.330000
rhsdw2 -> rhsdw03 = 203.220000
rhsdw04 -> rhmdw = 246.650000
rhsdw1 -> rhmdw = 241.580000
rhsdw03 -> rhsdw2 = 210.410000
rhmdw -> rhsdw04 = 148.580000

Summary:
sum = 1079.77 MB/sec
min = 29.33 MB/sec
max = 246.65 MB/sec
avg = 179.96 MB/sec
median = 210.41 MB/sec

[Warning] connection between rhmdw and rhsdw1 is no good
[Warning] connection between rhsdw2 and rhsdw03 is no good
[Warning] connection between rhsdw03 and rhsdw2 is no good
[Warning] connection between rhmdw and rhsdw04 is no good

5. Initialize the expansion
[gpadmin@rhmdw config]$ pwd
/gp/greenplum-db/config
[gpadmin@rhmdw config]$ more seg_expand
rhsdw03
rhsdw04
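The expansion host file is just one new hostname per line; with mirroring enabled, gpexpand insists on at least two hosts in it (see the error in 5.1 below). It can be generated directly:

```shell
# Write the expansion host file: one new hostname per line.
printf '%s\n' rhsdw03 rhsdw04 > seg_expand
cat seg_expand
```

Once the interactive `gpexpand -f seg_expand` run succeeds, it generates an inputfile; in Greenplum 5 the remaining steps are `gpexpand -i <inputfile>` to initialize the new segments, `gpexpand -d <duration>` to redistribute tables, and `gpexpand -c` to clean up the expansion schema.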

5.1 Attempting the expansion with only one server produces the following error:
[gpadmin@rhmdw config]$ gpexpand -f seg_expand -D gaoyc
20190404:10:17:29:004796 gpexpand:rhmdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44'
20190404:10:17:29:004796 gpexpand:rhmdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Jan 16 2019 02:32:15'
20190404:10:17:29:004796 gpexpand:rhmdw:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state

System Expansion is used to add segments to an existing GPDB array.
gpexpand did not detect a System Expansion that is in progress.

Before initiating a System Expansion, you need to provision and burn-in
the new hardware.  Please be sure to run gpcheckperf to make sure the
new hardware is working properly.

Please refer to the Admin Guide for more information.

Would you like to initiate a new System Expansion Yy|Nn (default=N):
> y
20190404:10:17:35:004796 gpexpand:rhmdw:gpadmin-[ERROR]:-gpexpand failed: You must be adding two or more hosts when expanding a system with mirroring enabled.

Exiting...
20190404:10:17:35:004796 gpexpand:rhmdw:gpad
