Installing a Greenplum 2+2-Node Cluster on VirtualBox VMs

I. Platform and Environment

1. Physical host: Windows 7, 32-bit, 4 GB RAM, 500 GB disk

2. Hypervisor: VirtualBox 4.3.4

     Guest OS: CentOS-6.5-i386

     Gateway IP shared by the four VMs (the VirtualBox virtual adapter IP): 192.168.56.1

     The four VMs are configured as follows:

Role       Hostname  IP              Bcast           Mask           Memory  Disk   Directories          Owner            Notes
Master     inspA     192.168.56.101  192.168.56.255  255.255.255.0  512 MB  50 GB  /data/master, /file  gpadmin:gpadmin  master node
Standby    inspB     192.168.56.102  192.168.56.255  255.255.255.0  512 MB  50 GB  /data/master, /file  gpadmin:gpadmin  standby master
Segment-1  inspC     192.168.56.103  192.168.56.255  255.255.255.0  512 MB  50 GB  /data1, /data2       gpadmin:gpadmin  data node 1
Segment-2  inspD     192.168.56.104  192.168.56.255  255.255.255.0  512 MB  50 GB  /data1, /data2       gpadmin:gpadmin  data node 2

 3. Greenplum install media:

      greenplum-db-4.1.1.1-build-1-RHEL5-i386.zip

 

 

II. System Configuration Changes (perform on every VM)

1. Edit /etc/sysctl.conf as below, then run sysctl -p to apply:

vm.overcommit_memory = 2
kernel.sem = 250 64000 100 512
kernel.shmmax = 68719476736
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.core.netdev_max_backlog = 10000
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.conf.default.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
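
After saving the file, the settings can be loaded and spot-checked without a reboot; a minimal check might look like this:

# Load the new parameters from /etc/sysctl.conf
sysctl -p
# Spot-check a couple of the values
sysctl kernel.shmmax kernel.sem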

 

2. Disable Transparent Huge Pages:

# Check the current state
[root@inspX ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

To disable it permanently, edit /etc/rc.local and append the following:

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
   echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
fi

Reboot and check the state again:

cat /sys/kernel/mm/transparent_hugepage/enabled
The expected output is: always madvise [never]
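
To apply the change right away without rebooting, the same values can be written by hand (a one-off step; rc.local covers subsequent boots):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag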

 

3. Set resource limits:

Set the following in /etc/security/limits.conf; it takes effect after the user logs out and back in.

* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072

Note: on RHEL 6 the settings above may not take effect, because the /etc/security/limits.d/90-nproc.conf file was introduced and overrides the nproc values; edit that file as well, then log out and back in. For details see: http://blog.csdn.net/kumu_linux/article/details/8301760
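
After logging back in, the limits can be verified for the current user:

[gpadmin@inspX ~]$ ulimit -n    # open files, expect 65536
[gpadmin@inspX ~]$ ulimit -u    # max user processes, expect 131072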

 

 4. Set the hostname and hosts file:

(1) Edit /etc/hosts as follows:

192.168.56.101 inspA
192.168.56.102 inspB
192.168.56.103 inspC
192.168.56.104 inspD
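
A quick sketch to confirm that each name resolves and responds, run from any node:

for h in inspA inspB inspC inspD; do ping -c 1 $h; done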

 

(2) Edit /etc/sysconfig/network as follows:

NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=inspX

     Note: here X stands for A, B, C, or D.
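
The /etc/sysconfig/network change takes effect on reboot; to rename the running session immediately as well:

[root@inspX ~]# hostname inspX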

 

 5. Create the gpadmin group and user:

[root@inspX ~]# groupadd gpadmin
[root@inspX ~]# useradd -g gpadmin gpadmin
[root@inspX ~]# passwd gpadmin
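
To confirm the account exists:

[root@inspX ~]# id gpadmin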

 

 6. Disable SELinux:

# vim /etc/sysconfig/selinux

#Steps:
#1. #SELINUX=enforcing     # comment out
#2. #SELINUXTYPE=targeted  # comment out
#3. SELINUX=disabled       # add this line
#4. :wq                    # save and quit
#5. shutdown -r now        # reboot


# The complete file after the edit:

# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
#SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
#SELINUXTYPE=targeted
SELINUX=disabled

 

 7. Allow remote root login (PermitRootLogin):

Edit /etc/ssh/sshd_config and set PermitRootLogin yes, then run: # service sshd restart
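
The edit can also be scripted; a sketch assuming the stock sshd_config, where the directive is commented out:

[root@inspX ~]# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
[root@inspX ~]# service sshd restart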

 

Note: steps 6 and 7 above mainly prepare for gpssh-exkeys, so the key exchange does not fail.

 

 

III. Installing the Greenplum Database

Log in to the master host as root to perform the installation.

1. Upload the install media greenplum-db-4.1.1.1-build-1-RHEL5-i386.zip to /usr/local and unzip it, which yields two files:

greenplum-db-4.1.1.1-build-1-RHEL5-i386.bin
README_INSTALL
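
For reference, the unpack step might look like this (assuming the zip has been copied to /usr/local):

[root@inspA ~]# cd /usr/local
[root@inspA local]# unzip greenplum-db-4.1.1.1-build-1-RHEL5-i386.zip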

 

2. Run the installer:

[root@inspA local]# bash ./greenplum-db-4.1.1.1-build-1-RHEL5-i386.bin

(1) A long block of installer text appears, followed by a prompt to accept the license agreement; type "yes".

Do you accept the Greenplum Database license agreement? [yes|no]

(2) Accept the default install path /usr/local/greenplum-db-4.1.1.1.

(3) When further confirmations appear, type "yes".

(4) For any other defaults, press Enter to accept them.

 

3. After installation completes, a symlink is created automatically, pointing to greenplum-db-4.1.1.1:

[gpadmin@inspA local]$ ls -l  | grep -i greenplum
lrwxrwxrwx   1 gpadmin gpadmin        22 Aug  3 11:37 greenplum-db -> ./greenplum-db-4.1.1.1
drwxr-xr-x  11 gpadmin gpadmin      4096 Aug  3 11:37 greenplum-db-4.1.1.1

 

4. Set the owner of the Greenplum directories to gpadmin:

[root@inspA local]# chown -R gpadmin:gpadmin greenplum-db*

 

5. Install the GPDB software on the standby master host (inspB) in the same way:

# the steps are identical to the GPDB install on the master host

 

6. Exchange keys and establish trust with gpssh-exkeys, once as root and once as gpadmin (the all_hosts file is described in section IV below):

[root@inspA greenplum-db]# gpssh-exkeys -f all_hosts
[gpadmin@inspA ~]$ gpssh-exkeys -f all_hosts

 

7. Use the gpseginstall tool from the master to install GPDB on each segment host:

First set the environment variables in the gpadmin user's .bashrc on the master:

if [ -f /usr/local/greenplum-db/greenplum_path.sh ]; then
        source /usr/local/greenplum-db/greenplum_path.sh
fi

export GPHOME=/usr/local/greenplum-db-4.x.x.x   # replace 4.x.x.x with the installed version, 4.1.1.1 here

Then, as root, run gpseginstall against the segment hosts (the all_segs file is described in section IV; password is the gpadmin password to set on the segments):

# source /usr/local/greenplum-db/greenplum_path.sh
# gpseginstall -f /home/gpadmin/gpconf/all_segs -u gpadmin -p password
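
Once it finishes, the install can be verified across the segment hosts; a quick sketch using gpssh:

# gpssh -f /home/gpadmin/gpconf/all_segs -e 'ls -ld /usr/local/greenplum-db*'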

 

IV. Node Configuration

1. Log in to the master host as gpadmin; under /home/gpadmin:

    (1) Create an all_hosts file with the following contents:

inspA
inspB
inspC
inspD

 

    (2) Create an all_segs file, with a copy under /home/gpadmin/gpconf/all_segs (the path referenced by gpseginstall and MACHINE_LIST_FILE), containing:

inspC
inspD

 

2. Set the gpadmin environment variables on each node (perform on every host):

Under /home/gpadmin, edit both .bashrc and .bash_profile, adding the following:

if [ -f /usr/local/greenplum-db/greenplum_path.sh ]; then
        source /usr/local/greenplum-db/greenplum_path.sh
fi

export GPHOME=/usr/local/greenplum-db-4.x.x.x   # replace 4.x.x.x with the installed version, 4.1.1.1 here

MASTER_DATA_DIRECTORY=/data/master/gpseg-1
export MASTER_DATA_DIRECTORY

 

Then source it:

source ./.bashrc
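
A quick sanity check that the environment is in place on each node:

[gpadmin@inspX ~]$ echo $GPHOME
[gpadmin@inspX ~]$ echo $MASTER_DATA_DIRECTORY
[gpadmin@inspX ~]$ which gpssh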

 

3. Test SSH trust between the nodes:

Verify that every node responds:

[gpadmin@inspA ~]$ gpssh -f ./all_hosts 
=> echo $HOSTNAME
[inspA] inspA
[inspC] inspC
[inspB] inspB
[inspD] inspD
=> 
=> date
[inspA] Sat Sep 20 16:24:27 CST 2014
[inspC] Sat Sep 20 16:24:27 CST 2014
[inspB] Sat Sep 20 16:24:27 CST 2014
[inspD] Sat Sep 20 16:24:27 CST 2014
=> 
=> pwd
[inspA] /home/gpadmin
[inspC] /home/gpadmin
[inspB] /home/gpadmin
[inspD] /home/gpadmin
=> 
=> hostname
[inspA] inspA
[inspC] inspC
[inspB] inspB
[inspD] inspD
=> 
=> quit

Individual nodes can also be tested one at a time:

[gpadmin@inspA ~]$ gpssh -h inspA
[gpadmin@inspA ~]$ gpssh -h inspB
[gpadmin@inspA ~]$ gpssh -h inspC
[gpadmin@inspA ~]$ gpssh -h inspD

 

4. Synchronize clocks:

To keep the clocks on all nodes consistent, a crontab job is set up under root on the master, running the script gp_cron_ntp.sh. (In this scheme the master runs the ntp service and syncs only from its own clock, while the other nodes stop the ntp service and use ntpdate to pull time from the master.)

#!/bin/bash
# gp_cron_ntp.sh: push the master's time to every node via gpssh

if [ -f /usr/local/greenplum-db/greenplum_path.sh ]; then
        source /usr/local/greenplum-db/greenplum_path.sh
fi

# Sync every host against the master (inspA), using an unprivileged port (-u)
gpssh -f /home/gpadmin/all_hosts -e 'ntpdate -u inspA'

sleep 1

# Print each host's time so drift is visible in the cron log
gpssh -f /home/gpadmin/all_hosts -e 'date'

  The crontab entry:

*/5 * * * * /root/shell/gp_cron_ntp.sh > /root/shell/gp_cron_ntp.log 2>&1

 

The ntpd-based setup in steps ① to ③ below follows this article on automatic ntp time sync from the master:

http://blog.csdn.net/bluishglc/article/details/41413031

① On the master host, edit /etc/ntp.conf, rewriting the ntp-server-related part as follows:

# The next 3 lines accept only these three hosts as clients for ntp time sync
restrict 192.168.56.102 mask 255.255.255.255 nomodify notrap
restrict 192.168.56.103 mask 255.255.255.255 nomodify notrap
restrict 192.168.56.104 mask 255.255.255.255 nomodify notrap

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# The next 2 lines: sync not from Internet ntp servers but from this server's own clock
server 127.127.1.0
fudge 127.127.1.0 stratum 8

② On every remaining node, configure /etc/ntp.conf as follows:

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.56.101

③ Also, the ntpd service must be enabled on all hosts in the cluster:

# chkconfig ntpd on
# service ntpd restart

Note: the ntp configuration above only succeeds with the firewall (iptables) service stopped.
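
To confirm that sync is actually working, the peers can be queried on the master and a dry-run query made from any other node:

# On the master, the local clock LOCAL(0) should appear as a peer
[root@inspA ~]# ntpq -p
# On a segment, query inspA without setting the clock
[root@inspC ~]# ntpdate -u -q inspA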

 

 


 5. Check that the OS configuration meets requirements:

[gpadmin@inspA ~]$ gpcheckos -f all_hosts

 

 6. Stop the firewall and SELinux:

Perform on all four nodes.

[root@inspX ~]# service iptables stop
[root@inspX ~]# chkconfig iptables off
[root@inspX ~]# service ip6tables stop
[root@inspX ~]# chkconfig ip6tables off

# Edit the /etc/sysconfig/selinux config file
SELINUX=disabled

# and run the following commands

[root@inspX ~]# setenforce 0
[root@inspX ~]# getenforce

# Optional: disable unneeded services
[root@inspX ~]# for SERVICES in abrtd acpid auditd cpuspeed haldaemon mdmonitor messagebus udev-post; do chkconfig ${SERVICES} off; done

 

7. Prepare the initialization file:

Copy the gpinitsystem_config file from /usr/local/greenplum-db-4.1.1.1/docs/cli_help/gpconfigs to /home/gpadmin/gpconf, renaming it gpinit_cfg:

[gpadmin@inspA gpconf]$ cp /usr/local/greenplum-db-4.1.1.1/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconf/gpinit_cfg

Edit it to contain the following:

ARRAY_NAME="EMC Greenplum DW"
SEG_PREFIX=gpseg
PORT_BASE=40000
REPLICATION_PORT_BASE=41000
declare -a DATA_DIRECTORY=(/data1/primary /data2/primary)
MASTER_HOSTNAME=inspA
MASTER_DIRECTORY=/data/master
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UTF-8
MIRROR_PORT_BASE=50000
MIRROR_REPLICATION_PORT_BASE=51000
declare -a MIRROR_DATA_DIRECTORY=(/data1/mirror /data2/mirror)
DATABASE_NAME=bass_gp
MACHINE_LIST_FILE=/home/gpadmin/gpconf/all_segs

Save and exit.

 

V. Initializing the Greenplum Database

  Run as the gpadmin user.

  Before initializing, the second-level directories under /data, /data1, and /data2 must already exist: /data/master on the master and standby hosts, and /data*/primary plus /data*/mirror on the segment hosts; otherwise initialization will fail. The directories can be created ahead of time as shown below.
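
A sketch of the directory preparation, run as root from the master, with paths following the table in section I:

# master and standby hold the master catalog under /data/master
gpssh -h inspA -h inspB -e 'mkdir -p /data/master && chown -R gpadmin:gpadmin /data'
# segment hosts hold primaries and mirrors under /data1 and /data2
gpssh -h inspC -h inspD -e 'mkdir -p /data1/primary /data1/mirror /data2/primary /data2/mirror && chown -R gpadmin:gpadmin /data1 /data2'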

[gpadmin@inspA ~]$ gpinitsystem -c gpconf/gpinit_cfg -s inspB --su_password=<password> -S

  ① The capital -S option spreads the mirror instances (i.e., the mirrors of one host's primaries are scattered across different hosts rather than grouped on a single host). Its precondition is that:

    the number of segment hosts is greater than the number of segment instances (on each host).

  ② The command produces a large amount of log output; watch it for errors that need handling.

 

VI. Starting, Stopping, and Checking the Database

1. Start:

[gpadmin@inspA ~]$ gpstart

 

2. Stop:

[gpadmin@inspA ~]$ gpstop

Note: before stopping, make sure no database connections remain; otherwise the shutdown fails.
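
Lingering sessions can be listed from the catalog before stopping (a sketch; gpstop -a answers its prompts automatically):

[gpadmin@inspA ~]$ psql -d bass_gp -c 'select * from pg_stat_activity;'
[gpadmin@inspA ~]$ gpstop -a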

  

 3. Check the Greenplum database state:

[gpadmin@inspA ~]$ gpstate
20140920:17:09:00:gpstate:inspA:gpadmin-[INFO]:-Starting gpstate with args: 
20140920:17:09:00:gpstate:inspA:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20140920:17:09:00:gpstate:inspA:gpadmin-[INFO]:-Obtaining Segment details from master...
20140920:17:09:00:gpstate:inspA:gpadmin-[INFO]:-Gathering data from segments...
..... 
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-Greenplum instance status summary
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-----------------------------------------------------
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Master instance                                           = Active
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Master standby                                            = inspB
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Standby master state                                      = Standby host passive
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total segment instance count from metadata                = 8
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-----------------------------------------------------
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Primary Segment Status
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-----------------------------------------------------
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total primary segments                                    = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total primary segment valid (at master)                   = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total primary segment failures (at master)                = 0
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number postmaster processes found                   = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-----------------------------------------------------
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Mirror Segment Status
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-----------------------------------------------------
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total mirror segments                                     = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total mirror segment valid (at master)                    = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total mirror segment failures (at master)                 = 0
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number postmaster processes found                   = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number mirror segments acting as primary segments   = 0
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-   Total number mirror segments acting as mirror segments    = 4
20140920:17:09:05:gpstate:inspA:gpadmin-[INFO]:-----------------------------------------------------

 

4. If a segment fails and its state goes down, recover it with gprecoverseg:

[gpadmin@inspA ~]$ gprecoverseg
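
While recovery runs, mirror resynchronization can be watched with gpstate:

[gpadmin@inspA ~]$ gpstate -m    # mirror instance status
[gpadmin@inspA ~]$ gpstate -e    # segments with error conditions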

 

 5. Check the segment count and configuration:

select * from gp_segment_configuration
order by hostname,role
;
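
The query can be run from psql on the master, for example:

[gpadmin@inspA ~]$ psql -d bass_gp -c 'select * from gp_segment_configuration order by hostname, role;'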

 

The output shows one row per instance: the master, the standby, and the four primary and four mirror segments.

Note: for clients to connect to the database, the access rules in pg_hba.conf must be configured beforehand.

 

 

Notice: this is an original article; please credit the source when reposting.

Author: goopand

 

Reposted from: https://my.oschina.net/goopand/blog/343708
