Greenplum 6.0 Installation Tutorial

I. Introduction to Greenplum

A high-level overview of the Greenplum Database system architecture:

  • Greenplum Database stores and processes large volumes of data by distributing the load across multiple servers, or hosts.
  • A logical database in Greenplum is a collection of individual PostgreSQL databases that work together to present a single database image. The master is the entry point to the Greenplum Database system.
  • The master is the database instance to which users connect and submit SQL statements.
  • The master coordinates the workload of the other database instances in the system, called segments, which handle data processing and storage. The segments communicate with one another and with the master over the interconnect, the networking layer of Greenplum Database.
The test environment used for this installation is as follows:

| Hostname | IP             | Spec    | OS Version                           | Role                      | Install Location        | Data Directories             |
|----------|----------------|---------|--------------------------------------|---------------------------|-------------------------|------------------------------|
| mdwh     | 172.16.101.112 | 6c, 16G | CentOS Linux release 7.5.1804 (Core) | master node               | /usr/local/greenplum-db | /data/master                 |
| sdwh1    | 172.16.101.113 | 6c, 16G | CentOS Linux release 7.5.1804 (Core) | segment node, mirror node | /usr/local/greenplum-db | /data/primary, /data/mirror  |
| sdwh2    | 172.16.101.114 | 6c, 16G | CentOS Linux release 7.5.1804 (Core) | segment node, mirror node | /usr/local/greenplum-db | /data/primary, /data/mirror  |


II. Configure the System (required on all servers)

1. Make sure your host systems meet the requirements described in the platform requirements.
2. Disable SELinux and firewall software.
Disable SELinux

  1. As root, check the status of SELinux:

  [root@mdwh ~]# sestatus

  SELinux status: disabled

  2. If SELinux is not disabled, disable it by editing the /etc/selinux/config file:

  SELINUX=disabled
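
  Editing /etc/selinux/config only takes effect after a reboot. If a reboot is not convenient right away, enforcement can usually also be turned off for the running system; a small optional sketch (the permanent setting above is still required):

  # Switch SELinux to permissive mode for the current boot (no reboot needed);
  # the SELINUX=disabled setting above governs subsequent boots.
  setenforce 0
  getenforce    # should report "Permissive" now, or "Disabled" after a reboot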

Disable the firewall

  systemctl stop firewalld && systemctl disable firewalld.service
3. Set the required operating system parameters.
  1. vi /etc/sysctl.conf
The reference parameters are as follows:
        
kernel.shmall = 4000000000
kernel.shmmax = 500000000
kernel.shmmni = 4096
vm.overcommit_memory = 2
vm.overcommit_ratio = 95 
net.ipv4.ip_local_port_range = 10000 65535
kernel.sem = 500 2048000 200 40960
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.swappiness = 10
vm.zone_reclaim_mode = 0
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.dirty_background_ratio = 0 # See Note 5
vm.dirty_ratio = 0
vm.dirty_background_bytes = 1610612736
vm.dirty_bytes = 4294967296
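
The shared-memory values above are fixed examples for this 16 GB test environment. The Greenplum documentation also describes deriving kernel.shmall and kernel.shmmax from the host's physical memory; a rough sketch of that calculation (the results will differ per host, verify against the platform requirements for your release):

# Print suggested shared-memory settings for this host:
# shmall = half of the physical pages, shmmax = shmall * page size.
echo "kernel.shmall = $(expr $(getconf _PHYS_PAGES) / 2)"
echo "kernel.shmmax = $(expr $(getconf _PHYS_PAGES) / 2 \* $(getconf PAGE_SIZE))"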
  2. Apply the changes so they take effect immediately:
sysctl -p
  3. Raise the limits on open files and processes: vi /etc/security/limits.conf and append the following at the end of the file:
* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072
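
These limits apply to new login sessions; a quick way to confirm them after logging in again (a simple check, not part of the original steps):

# Log in again (for example: su - gpadmin) and verify the new limits
ulimit -n    # expected: 524288 (max open files)
ulimit -u    # expected: 131072 (max user processes)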
4. Create the IP-to-hostname mappings: vi /etc/hosts
172.16.101.112 mdwh 
172.16.101.113 sdwh1
172.16.101.114 sdwh2
5. Create the gpadmin account.
groupadd -g 530 gpadmin 

useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin 

chown -R gpadmin:gpadmin /home/gpadmin 

echo "gpadmin" | passwd --stdin gpadmin
6. Configure passwordless SSH login for the root and gpadmin users
As root, logged in to the master node:

1. Run ssh-keygen to generate the public and private keys
2. Use ssh-copy-id to copy the public key to the remote machines
    ssh-copy-id sdwh1
    ssh-copy-id sdwh2

As gpadmin, logged in to the master node:

1. Run ssh-keygen to generate the public and private keys
2. Use ssh-copy-id to copy the public key to the remote machines
    ssh-copy-id sdwh1
    ssh-copy-id sdwh2
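
Before moving on, it helps to confirm that both root and gpadmin can reach every segment host without a password prompt; a minimal check (hostnames taken from the /etc/hosts entries above):

    # Run once as root and once as gpadmin on mdwh; each command should
    # print the remote hostname without asking for a password.
    for h in sdwh1 sdwh2; do ssh -o BatchMode=yes "$h" hostname; done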

III. Install Greenplum Database

  1. Download the installation package and place it on the master node
1. Download the greenplum-db-6.0.0-rhel7-x86_64.rpm file from https://github.com/greenplum-db/gpdb/releases/tag/6.0.0

2. Install it with yum install greenplum-db-6.0.0-rhel7-x86_64.rpm or
rpm -ivh greenplum-db-6.0.0-rhel7-x86_64.rpm
  2. Install the rpm package on the master node
[root@mdwh ~]# rpm -ivh greenplum-db-6.0.0-rhel7-x86_64.rpm 
    Preparing...                          ################################# [100%]
    Updating / installing...
       1:greenplum-db-6.0.0-1.el7         ################################# [100%]
  3. Set the environment variables
source /usr/local/greenplum-db/greenplum_path.sh
  4. Create a gpconfigs directory to hold the configuration files used during the installation
[root@mdwh ~]# mkdir /usr/local/greenplum-db-6.0.0/gpconfigs

[root@mdwh gpadmin]# cd /usr/local/greenplum-db-6.0.0/gpconfigs

allhosts: hostnames of all servers
seghosts: hostnames of all segment nodes

[root@mdwh gpconfigs]# vi allhosts
    Enter:
        mdwh   
        sdwh1
        sdwh2

[root@mdwh gpconfigs]# vi seghosts
    Enter:
        sdwh1
        sdwh2
  5. Install the rpm package on all segment nodes via gpssh
  • Run the following command to redo the ssh key exchange
[root@mdwh gpconfigs]# gpssh-exkeys -f seghosts 
[STEP 1 of 5] create local ID and authorize on local host
  ... /root/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] retrieving credentials from remote hosts
  ... send to sdwh1
  ... send to sdwh2

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with sdwh1
  ... finished key exchange with sdwh2

[INFO] completed successfully
  • Perform the installation
1. Connect to all segment nodes with gpssh -f seghosts

    [root@mdwh gpconfigs]# gpssh -f seghosts 
    => pwd  
    [sdwh1] /root
    [sdwh2] /root
    => pwd
    [sdwh1] /root
    [sdwh2] /root
    
2. Copy the rpm package from the master host

    => scp mdwh:/root/greenplum-db-6.0.0-rhel7-x86_64.rpm ./
    greenplum-db-6.0.0-rhel7-x86_64.rpm           100%  172MB  49.2MB/s   00:03
    greenplum-db-6.0.0-rhel7-x86_64.rpm           100%  172MB  31.9MB/s   00:05
    => ll

3. Install the rpm package
    
    => rpm -ivh /root/greenplum-db-6.0.0-rhel7-x86_64.rpm
    [sdwh1] Preparing...                          ################################# [100%]
    [sdwh1] Updating / installing...
    [sdwh1]    1:greenplum-db-6.0.0-1.el7         ################################# [100%]
    [sdwh2] Preparing...                          ################################# [100%]
    [sdwh2] Updating / installing...
    [sdwh2]    1:greenplum-db-6.0.0-1.el7         ################################# [100%]
    => 
    => chown -R gpadmin:gpadmin /usr/local/greenplum*
    [sdwh1]
    [sdwh2]
    =>
  6. Create the primary and mirror storage directories on the segment nodes
=> mkdir /data/primary && chown -R gpadmin:gpadmin /data/primary
[sdwh1]
[sdwh2]
=> mkdir /data/mirror && chown -R gpadmin:gpadmin /data/mirror
[sdwh1]
[sdwh2]
=> exit
  7. Change the ownership of the Greenplum installation directory on the master node
[root@mdwh gpconfigs]# chown -R gpadmin:gpadmin /usr/local/greenplum*
  8. Switch to the gpadmin user and initialize the Greenplum database
1. Switch user

    [root@mdwh gpconfigs]# su gpadmin

2. Copy the initialization template file from the Greenplum installation directory
    
    [gpadmin@mdwh gpconfigs]$ cp /usr/local/greenplum-db/docs/cli_help/gpconfigs/gpinitsystem_config /usr/local/greenplum-db/gpconfigs/
    
3. Edit the gpinitsystem_config configuration file

    vim /usr/local/greenplum-db/gpconfigs/gpinitsystem_config
    
---
    The configuration file used in this example is as follows:
    
    # FILE NAME: gpinitsystem_config

    # Configuration file needed by the gpinitsystem

    ################################################
    #### REQUIRED PARAMETERS
    ################################################

    #### Name of this Greenplum system enclosed in quotes.
    ARRAY_NAME="Greenplum Data Platform"

    #### Naming convention for utility-generated data directories.
    SEG_PREFIX=gpseg

    #### Base number by which primary segment port numbers 
    #### are calculated.
    PORT_BASE=6000

    #### File system location(s) where primary segment data directories 
    #### will be created. The number of locations in the list dictate
    #### the number of primary segments that will get created per
    #### physical host (if multiple addresses for a host are listed in 
    #### the hostfile, the number of segments will be spread evenly across
    #### the specified interface addresses).
    declare -a DATA_DIRECTORY=(/data/primary)

    #### OS-configured hostname or IP address of the master host.
    MASTER_HOSTNAME=mdwh

    #### File system location where the master data directory 
    #### will be created.
    MASTER_DIRECTORY=/data/master

    #### Port number for the master instance. 
    MASTER_PORT=5432

    #### Shell utility used to connect to remote hosts.
    TRUSTED_SHELL=ssh

    #### Maximum log file segments between automatic WAL checkpoints.
    CHECK_POINT_SEGMENTS=8

    #### Default server-side character set encoding.
    ENCODING=UNICODE

    ################################################
    #### OPTIONAL MIRROR PARAMETERS
    ################################################

    #### Base number by which mirror segment port numbers 
    #### are calculated.
    MIRROR_PORT_BASE=7000

    #### File system location(s) where mirror segment data directories 
    #### will be created. The number of mirror locations must equal the
    #### number of primary locations as specified in the 
    #### DATA_DIRECTORY parameter.
    declare -a MIRROR_DATA_DIRECTORY=(/data/mirror)


    ################################################
    #### OTHER OPTIONAL PARAMETERS
    ################################################

    #### Create a database of this name after initialization.
    #DATABASE_NAME=name_of_database

    #### Specify the location of the host address file here instead of
    #### with the the -h option of gpinitsystem.
    #MACHINE_LIST_FILE=/home/gpadmin/gpconfigs/hostfile_gpinitsystem
---

    Explanation of the configuration file:
    
    ### Required parameters
    # Name of this Greenplum system
    ARRAY_NAME="Greenplum Data Platform"
    # Naming convention for the generated data directories
    SEG_PREFIX=gpseg
    # Base port number used by the primary segments
    PORT_BASE=6000 
    # File system location(s) of the primary segment data directories. The number of locations in the list dictates how many segment instances are created per host
    declare -a DATA_DIRECTORY=(/data1/primary /data1/primary /data1/primary /data2/primary /data2/primary /data2/primary)
    # Hostname or IP address of the master node
    MASTER_HOSTNAME=mdw 
    # File system location of the master data directory
    MASTER_DIRECTORY=/data/master 
    # Port number for the master instance
    MASTER_PORT=5432 
    # Shell utility used to connect to remote hosts
    TRUSTED_SHELL=ssh
    # Maximum log file segments between automatic WAL checkpoints
    CHECK_POINT_SEGMENTS=8
    # Default server-side character set encoding
    ENCODING=UNICODE

    ### Optional mirror parameters
    # Base number from which mirror segment port numbers are calculated
    MIRROR_PORT_BASE=7000
    # File system location(s) of the mirror segment data directories. The number of locations in the list dictates how many mirror instances are created per host
    declare -a MIRROR_DATA_DIRECTORY=(/data1/mirror /data1/mirror /data1/mirror /data2/mirror /data2/mirror /data2/mirror)

    ### Other optional parameters
    # Create a database with this name after initialization
    # DATABASE_NAME=name_of_database

    # Specify the location of the host address file here instead of with the -h option of gpinitsystem
    #MACHINE_LIST_FILE=/home/gpadmin/gpconfigs/hostfile_gpinitsystem
    
4. Set the environment variables on the master node

    [gpadmin@mdwh gpconfigs]$ vi ~/.bashrc
    
    export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
    export PGPORT=5432
    export PGUSER=gpadmin
    export LD_PRELOAD=/lib64/libz.so.1

    [gpadmin@mdwh gpconfigs]$ source ~/.bashrc
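
    As the gpinitsystem output further below also recommends, it is worth making sure greenplum_path.sh is sourced from gpadmin's ~/.bashrc as well, so the Greenplum utilities stay on the PATH after every login (assuming the default install prefix):

    # Keep the Greenplum environment loaded for every gpadmin login
    source /usr/local/greenplum-db/greenplum_path.sh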

5. Run the initialization

    Initialization command: gpinitsystem -c gpinitsystem_config -h seghosts -s sdwh2 -S
    
    Installation log:
    [gpadmin@mdwh gpconfigs]$ gpinitsystem -c gpinitsystem_config -h seghosts -s sdwh2 -S
    20191016:18:41:09:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Checking configuration parameters, please wait...
    20191016:18:41:09:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Reading Greenplum configuration file gpinitsystem_config
    20191016:18:41:09:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Locale has not been set in gpinitsystem_config, will set to default value
    20191016:18:41:09:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Locale set to en_US.utf8
    20191016:18:41:09:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-No DATABASE_NAME set, will exit following template1 updates
    20191016:18:41:09:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-MASTER_MAX_CONNECT not set, will set to default value 250
    20191016:18:41:09:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Checking configuration parameters, Completed
    20191016:18:41:09:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
    ..
    20191016:18:41:10:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Configuring build for standard array
    20191016:18:41:10:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Commencing multi-home checks, Completed
    20191016:18:41:10:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Building primary segment instance array, please wait...
    ..
    20191016:18:41:10:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Building group mirror array type , please wait...
    ..
    20191016:18:41:11:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Checking Master host
    20191016:18:41:11:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Checking new segment hosts, please wait...
    ....
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Checking new segment hosts, Completed
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Greenplum Database Creation Parameters
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:---------------------------------------
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master Configuration
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:---------------------------------------
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master instance name       = Greenplum Data Platform
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master hostname            = mdwh
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master port                = 5432
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master instance dir        = /data/master/gpseg-1
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master LOCALE              = en_US.utf8
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Greenplum segment prefix   = gpseg
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master Database            = 
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master connections         = 250
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master buffers             = 128000kB
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Segment connections        = 750
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Segment buffers            = 128000kB
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Checkpoint segments        = 8
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Encoding                   = UNICODE
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Postgres param file        = Off
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Initdb to be used          = /usr/local/greenplum-db/./bin/initdb
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-GP_LIBRARY_PATH is         = /usr/local/greenplum-db/./lib
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-HEAP_CHECKSUM is           = on
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-HBA_HOSTNAMES is           = 0
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Ulimit check               = Passed
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Array host connect type    = Single hostname per node
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master IP address [1]      = ::1
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master IP address [2]      = 172.16.101.112
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Master IP address [3]      = fe80::20c:29ff:fed3:e7a7
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Standby Master             = Not Configured
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Number of primary segments = 1
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Total Database segments    = 2
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Trusted shell              = ssh
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Number segment hosts       = 2
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Mirror port base           = 7000
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Number of mirror segments  = 1
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Mirroring config           = ON
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Mirroring type             = Group
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:----------------------------------------
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:----------------------------------------
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-sdwh1        /data/primary/gpseg0    6000    2       0
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-sdwh2        /data/primary/gpseg1    6000    3       1
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:---------------------------------------
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Greenplum Mirror Segment Configuration
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:---------------------------------------
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-sdwh2        /data/mirror/gpseg0     7000    4       0
    20191016:18:41:14:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-sdwh1        /data/mirror/gpseg1     7000    5       1

    Continue with Greenplum creation Yy|Nn (default=N):
    > y
    20191016:18:41:18:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Building the Master instance database, please wait...
    20191016:18:41:25:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Starting the Master in admin mode
    20191016:18:41:26:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
    20191016:18:41:26:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
    ..
    20191016:18:41:26:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
    ............
    20191016:18:41:38:001574 gpinitsystem:mdwh:gpadmin-[INFO]:------------------------------------------------
    20191016:18:41:38:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Parallel process exit status
    20191016:18:41:38:001574 gpinitsystem:mdwh:gpadmin-[INFO]:------------------------------------------------
    20191016:18:41:38:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Total processes marked as completed           = 2
    20191016:18:41:38:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Total processes marked as killed              = 0
    20191016:18:41:38:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Total processes marked as failed              = 0
    20191016:18:41:38:001574 gpinitsystem:mdwh:gpadmin-[INFO]:------------------------------------------------
    20191016:18:41:38:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Deleting distributed backout files
    20191016:18:41:38:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Removing back out file
    20191016:18:41:38:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-No errors generated from parallel processes
    20191016:18:41:38:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
    20191016:18:41:38:003860 gpstop:mdwh:gpadmin-[INFO]:-Starting gpstop with args: -a -l /home/gpadmin/gpAdminLogs -m -d /data/master/gpseg-1
    20191016:18:41:38:003860 gpstop:mdwh:gpadmin-[INFO]:-Gathering information and validating the environment...
    20191016:18:41:38:003860 gpstop:mdwh:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
    20191016:18:41:38:003860 gpstop:mdwh:gpadmin-[INFO]:-Obtaining Segment details from master...
    20191016:18:41:38:003860 gpstop:mdwh:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 6.0.0 build commit:a2ff2fc318c327d702e66ab58ae4aff34c42296c'
    20191016:18:41:38:003860 gpstop:mdwh:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
    20191016:18:41:38:003860 gpstop:mdwh:gpadmin-[INFO]:-Master segment instance directory=/data/master/gpseg-1
    20191016:18:41:38:003860 gpstop:mdwh:gpadmin-[INFO]:-Stopping master segment and waiting for user connections to finish ...
    server shutting down
    20191016:18:41:39:003860 gpstop:mdwh:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
    20191016:18:41:39:003860 gpstop:mdwh:gpadmin-[INFO]:-Terminating processes for segment /data/master/gpseg-1
    20191016:18:41:39:003860 gpstop:mdwh:gpadmin-[ERROR]:-Failed to kill processes for segment /data/master/gpseg-1: ([Errno 3] No such process)
    20191016:18:41:40:003884 gpstart:mdwh:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /data/master/gpseg-1
    20191016:18:41:40:003884 gpstart:mdwh:gpadmin-[INFO]:-Gathering information and validating the environment...
    20191016:18:41:40:003884 gpstart:mdwh:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 6.0.0 build commit:a2ff2fc318c327d702e66ab58ae4aff34c42296c'
    20191016:18:41:40:003884 gpstart:mdwh:gpadmin-[INFO]:-Greenplum Catalog Version: '301908232'
    20191016:18:41:40:003884 gpstart:mdwh:gpadmin-[INFO]:-Starting Master instance in admin mode
    20191016:18:41:41:003884 gpstart:mdwh:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
    20191016:18:41:41:003884 gpstart:mdwh:gpadmin-[INFO]:-Obtaining Segment details from master...
    20191016:18:41:41:003884 gpstart:mdwh:gpadmin-[INFO]:-Setting new master era
    20191016:18:41:41:003884 gpstart:mdwh:gpadmin-[INFO]:-Master Started...
    20191016:18:41:41:003884 gpstart:mdwh:gpadmin-[INFO]:-Shutting down master
    20191016:18:41:42:003884 gpstart:mdwh:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
    .
    20191016:18:41:43:003884 gpstart:mdwh:gpadmin-[INFO]:-Process results...
    20191016:18:41:43:003884 gpstart:mdwh:gpadmin-[INFO]:-----------------------------------------------------
    20191016:18:41:43:003884 gpstart:mdwh:gpadmin-[INFO]:-   Successful segment starts                                            = 2
    20191016:18:41:43:003884 gpstart:mdwh:gpadmin-[INFO]:-   Failed segment starts                                                = 0
    20191016:18:41:43:003884 gpstart:mdwh:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
    20191016:18:41:43:003884 gpstart:mdwh:gpadmin-[INFO]:-----------------------------------------------------
    20191016:18:41:43:003884 gpstart:mdwh:gpadmin-[INFO]:-Successfully started 2 of 2 segment instances 
    20191016:18:41:43:003884 gpstart:mdwh:gpadmin-[INFO]:-----------------------------------------------------
    20191016:18:41:43:003884 gpstart:mdwh:gpadmin-[INFO]:-Starting Master instance mdwh directory /data/master/gpseg-1 
    20191016:18:41:54:003884 gpstart:mdwh:gpadmin-[INFO]:-Command pg_ctl reports Master mdwh instance active
    20191016:18:41:54:003884 gpstart:mdwh:gpadmin-[INFO]:-Connecting to dbname='template1' connect_timeout=15
    20191016:18:41:57:003884 gpstart:mdwh:gpadmin-[INFO]:-No standby master configured.  skipping...
    20191016:18:41:57:003884 gpstart:mdwh:gpadmin-[INFO]:-Database successfully started
    20191016:18:41:57:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
    20191016:18:41:57:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
    20191016:18:41:57:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
    ..
    20191016:18:41:57:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
    ........
    20191016:18:42:05:001574 gpinitsystem:mdwh:gpadmin-[INFO]:------------------------------------------------
    20191016:18:42:05:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Parallel process exit status
    20191016:18:42:05:001574 gpinitsystem:mdwh:gpadmin-[INFO]:------------------------------------------------
    20191016:18:42:05:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Total processes marked as completed           = 2
    20191016:18:42:05:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Total processes marked as killed              = 0
    20191016:18:42:05:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Total processes marked as failed              = 0
    20191016:18:42:05:001574 gpinitsystem:mdwh:gpadmin-[INFO]:------------------------------------------------
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Scanning utility log file for any warning messages
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[WARN]:-*******************************************************
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[WARN]:-Scan of log file indicates that some warnings or errors
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[WARN]:-were generated during the array creation
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Please review contents of log file
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-/home/gpadmin/gpAdminLogs/gpinitsystem_20191016.log
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-To determine level of criticality
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-These messages could be from a previous run of the utility
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-that was called today!
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[WARN]:-*******************************************************
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Greenplum Database instance successfully created
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-------------------------------------------------------
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-To complete the environment configuration, please 
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/data/master/gpseg-1"
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-   to access the Greenplum scripts for this instance:
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-   or, use -d /data/master/gpseg-1 option for the Greenplum scripts
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-   Example gpstate -d /data/master/gpseg-1
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20191016.log
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-To initialize a Standby Master Segment for this Greenplum instance
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Review options for gpinitstandby
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-------------------------------------------------------
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-The Master /data/master/gpseg-1/pg_hba.conf post gpinitsystem
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-has been configured to allow all hosts within this new
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-new array must be explicitly added to this file
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-located in the /usr/local/greenplum-db/./docs directory
    20191016:18:42:06:001574 gpinitsystem:mdwh:gpadmin-[INFO]:-------------------------------------------------------

    The message "Greenplum Database instance successfully created" in the log output indicates that the initialization succeeded.
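
    At this point a quick sanity check can be run from the master with psql (an optional check, not part of the original log):

    # Connect to the default postgres database and confirm the server version
    psql -d postgres -c "SELECT version();"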

9. Activate the standby master
gpinitstandby -s sdwh2

[gpadmin@mdwh  gpseg-1]$ gpinitstandby -s sdwh2
20191020:21:45:10:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20191020:21:45:11:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Checking for data directory /data/master/gpseg-1 on sdwh2
20191020:21:45:11:030881 gpinitstandby:mdwh:gpadmin-[INFO]:------------------------------------------------------
20191020:21:45:11:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20191020:21:45:11:030881 gpinitstandby:mdwh:gpadmin-[INFO]:------------------------------------------------------
20191020:21:45:11:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Greenplum master hostname               = mdwh
20191020:21:45:11:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Greenplum master data directory         = /data/master/gpseg-1
20191020:21:45:11:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Greenplum master port                   = 5432
20191020:21:45:11:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Greenplum standby master hostname       = sdwh2
20191020:21:45:11:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20191020:21:45:11:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Greenplum standby master data directory = /data/master/gpseg-1
20191020:21:45:11:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Greenplum update system catalog         = On
Do you want to continue with standby master initialization? Yy|Nn (default=N):
y
20191020:21:45:33:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20191020:21:45:34:030881 gpinitstandby:mdwh:gpadmin-[WARNING]:-Syncing of Greenplum Database extensions has failed.
20191020:21:45:34:030881 gpinitstandby:mdwh:gpadmin-[WARNING]:-Please run gppkg --clean after successful standby initialization.
20191020:21:45:34:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Adding standby master to catalog...
20191020:21:45:34:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Database catalog updated successfully.
20191020:21:45:35:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Updating pg_hba.conf file...
20191020:21:45:37:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20191020:21:46:01:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Starting standby master
20191020:21:46:01:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Checking if standby master is running on host: sdwh2  in directory: /data/master/gpseg-1
20191020:21:46:04:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20191020:21:46:08:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20191020:21:46:08:030881 gpinitstandby:mdwh:gpadmin-[INFO]:-Successfully created standby master on sdwh2

10. Run gppkg --clean

20191021:08:54:43:006109 gppkg:mdwh:root-[INFO]:-Starting gppkg with args: --clean
20191021:08:54:44:006109 gppkg:mdwh:root-[INFO]:-The packages on sdwh2 are consistent.
20191021:08:54:44:006109 gppkg:mdwh:root-[INFO]:-The packages on sdwh1 are consistent.
20191021:08:54:44:006109 gppkg:mdwh:root-[INFO]:-Successfully cleaned the cluster
11. Check the Greenplum processes
[gpadmin@mdwh gpconfigs]$ ps aux | grep green
gpadmin   3928  7.3  1.3 483048 222600 ?       Ss   18:41   0:06 /usr/local/greenplum-db-6.0.0/bin/postgres -D /data/master/gpseg-1 -p 5432 -E
gpadmin   4684  0.0  0.0 114860   996 pts/0    S+   18:43   0:00 grep --color=auto green

12. Check the standby status with gpstate -f

[gpadmin@mdwh gpseg-1]$ gpstate -f

20191020:21:46:22:031017 gpstate:mdwh:gpadmin-[INFO]:-Starting gpstate with args: -f
20191020:21:46:25:031017 gpstate:mdwh:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.0.0 build commit:a2ff2fc318c327d702e66ab58ae4aff34c42296c'
20191020:21:46:25:031017 gpstate:mdwh:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.24 (Greenplum Database 6.0.0 build commit:a2ff2fc318c327d702e66ab58ae4aff34c42296c) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Aug 28 2019 21:29:36'
20191020:21:46:25:031017 gpstate:mdwh:gpadmin-[INFO]:-Obtaining Segment details from master...
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:-Standby master details
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:-----------------------
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:-   Standby address          = sdwh2
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:-   Standby data directory   = /data/master/gpseg-1
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:-   Standby port             = 5432
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:-   Standby PID              = 20317
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:-   Standby status           = Standby host passive
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:--------------------------------------------------------------
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:--pg_stat_replication
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:--------------------------------------------------------------
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:--WAL Sender State: streaming
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:--Sync state: sync
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:--Sent Location: 0/C000000
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:--Flush Location: 0/C000000
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:--Replay Location: 0/C000000
20191020:21:46:26:031017 gpstate:mdwh:gpadmin-[INFO]:--------------------------------------------------------------

The line gpstate:mdwh:gpadmin-[INFO]:- Standby status = Standby host passive indicates that the standby was activated successfully.

13. Check the segment status with gpstate -m

14. Allow access to Greenplum from any IP address
1. Log in to the master node

2. Edit pg_hba.conf

    [gpadmin@mdwh gpconfigs]$ vim /data/master/gpseg-1/pg_hba.conf 
    Append the following line at the end of the file: host     all        all         0.0.0.0/0     trust
3. Use the Greenplum command that reloads only the configuration file changes
    
    [gpadmin@mdwh gpconfigs]$ gpstop -u
    20191016:20:35:51:005963 gpstop:mdwh:gpadmin-[INFO]:-Starting gpstop with args: -u
    20191016:20:35:51:005963 gpstop:mdwh:gpadmin-[INFO]:-Gathering information and validating the environment...
    20191016:20:35:51:005963 gpstop:mdwh:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
    20191016:20:35:51:005963 gpstop:mdwh:gpadmin-[INFO]:-Obtaining Segment details from master...
    20191016:20:35:52:005963 gpstop:mdwh:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 6.0.0 build commit:a2ff2fc318c327d702e66ab58ae4aff34c42296c'
    20191016:20:35:52:005963 gpstop:mdwh:gpadmin-[INFO]:-Signalling all postmaster processes to reload

4. Connect to the Greenplum database with a database client
    host: 172.16.101.112
    port: 5432
    initial database: postgres
    username: gpadmin
    password: gpadmin
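
    For example, with the psql command-line client the same connection would look like this (assuming the client host is covered by the pg_hba.conf rule added above):

    psql -h 172.16.101.112 -p 5432 -U gpadmin -d postgres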

IV. Starting and Stopping Greenplum Database

  1. Start Greenplum Database
gpstart
  2. Restart Greenplum Database: stop the Greenplum Database system and then restart it.
gpstop -r
  3. Reload configuration file changes only: reload changes to the Greenplum Database configuration files without interrupting the system.
gpstop -u
  4. Start the master in maintenance mode: start only the master instance to perform maintenance or administrative tasks without affecting data on the segments.
gpstart -m
  5. Stop Greenplum Database
gpstop
  6. Stop Greenplum Database in fast mode
gpstop -M fast
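
To check the state of the cluster after any of these operations, gpstate can be used (a few commonly used invocations; see gpstate --help for the full list):

gpstate        # brief status of the whole system
gpstate -s     # detailed status of every primary and mirror segment
gpstate -m     # mirror segment status
gpstate -f     # standby master details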