0. Greenplum Cluster
Node group: 4 servers
Hostname | IP address | Description
mdw1 | 192.168.13.111 | CentOS 6.5, master
smdw1 | 192.168.13.112 | CentOS 6.5, standby master
sdw1 | 192.168.13.113 | CentOS 6.5, segment host 1
sdw2 | 192.168.13.114 | CentOS 6.5, segment host 2
1. Modify kernel parameters in /etc/sysctl.conf
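The table of sysctl values did not survive in this document. As a reference sketch only: the three kernel values this cluster's own gpcheck run expects (see section 10.2) plus the remaining settings published in the Greenplum 4.3 installation guide are listed below; apply with sysctl -p after editing.

kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2

# load the new values into the running kernel
sysctl -p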
2. Modify /etc/security/limits.conf
Append the following to the end of limits.conf:
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
* soft core unlimited
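The limits apply to new login sessions; a quick check after re-logging in (not part of the original notes):

ulimit -n    # open files, expect 65536
ulimit -u    # max user processes, expect 131072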
3. Disable SELinux
vim /etc/selinux/config
SELINUX=disabled
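The config change above only applies after a reboot. To stop enforcement in the running session as well, the standard commands are:

setenforce 0
getenforce    # now reports Permissive (Disabled after a reboot with the config change)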
4. Use EXT4 or XFS for the file system
EXT4 (Fourth Extended Filesystem) is a journaling file system for Linux and the successor to ext3.
An ext4 file system can grow to 1 EB, with individual files up to 16 TB. That hardly matters for ordinary desktops and servers, but it is significant for users running large disk arrays.
XFS is a 64-bit file system supporting a single file system of up to 8 EB minus 1 byte, though actual deployments are limited by the host operating system's maximum block count. On a 32-bit Linux system, files and file systems are capped at 16 TB.
Each has its strengths, and their performance is broadly comparable. When Google considered upgrading from ext2, for example, it settled on ext4; the company said it also evaluated XFS and JFS, finding that ext4 and XFS performed similarly but that moving from ext2 to ext4 was the easier upgrade.
Initialize the disk with: mkfs.ext4 /dev/sdc1
5. Disk I/O scheduler policy
The Linux disk I/O scheduler supports several policies. The default is CFQ; Greenplum recommends deadline.
Check the disk's current I/O scheduler; the active policy shown in brackets is [cfq]:
[root@dwhm01_2_111 ~]# cat /sys/block/        (pressing Tab here lists the block devices)
loop0/ loop2/ loop4/ loop6/ ram0/ ram10/ ram12/ ram14/ ram2/ ram4/ ram6/ ram8/ sda/ sdc/
loop1/ loop3/ loop5/ loop7/ ram1/ ram11/ ram13/ ram15/ ram3/ ram5/ ram7/ ram9/ sdb/ sr0/
[root@dwhm01_2_111 ~]# cat /sys/block/sdc/queue/scheduler
noop anticipatory deadline [cfq]
[root@dwhm01_2_111 ~]#
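A reboot-free way to switch the scheduler for a single device is to write to the same sysfs file (standard Linux behavior, not shown in the original notes):

echo deadline > /sys/block/sdc/queue/scheduler
cat /sys/block/sdc/queue/scheduler    # now shows: noop anticipatory [deadline] cfq

The grub change below is what makes the setting survive reboots.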
To make the policy persistent, append elevator=deadline to the end of the kernel line in /boot/grub/menu.lst:
[root@dwhm01_2_111 ~]# vim /boot/grub/menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You do not have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /, eg.
#          root (hd0,0)
#          kernel /boot/vmlinuz-version ro root=/dev/sda1
#          initrd /boot/initrd-[generic-]version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.32-431.29.2.el6.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-431.29.2.el6.x86_64 ro root=UUID=6d089360-3e14-401d-91d0-378f3fd09332 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM numa=off console=ttyS0 earlyprintk=ttyS0 rootdelay=300 elevator=deadline
        initrd /boot/initramfs-2.6.32-431.29.2.el6.x86_64.img
6. Configure disk read-ahead
Set the read-ahead (blockdev) value for each disk device file to 65536.
Use the /dev device nodes listed by fdisk -l; applying the setting to the mount points shown by df -h does not work:
[root@dwhm01_2_111 ~]# blockdev --getra /dev/sdc1
256
[root@dwhm01_2_111 ~]# blockdev --getra /data
BLKRAGET: Inappropriate ioctl for device
[root@dwhm01_2_111 ~]#
Add the /dev/sdc1 setting to /etc/rc.d/rc.local:
[root@dwhm01_2_111 ~]# vim /etc/rc.d/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
blockdev --setra 65536 /dev/sdc1
for i in /sys/class/scsi_generic/*/device/timeout; do echo 900 > "$i"; done
# Reboot the server for the setting to take effect
[root@dwhm01_2_111 ~]# shutdown -r now

Broadcast message from adminuser@dwhm01_2_111
        (/dev/pts/0) at 10:46 ...

The system is going down for reboot NOW!
Connection closed by foreign host.

Disconnected from remote host(192.168.13.111-m1) at 10:46:04.

Type `help' to learn how to use Xshell prompt.
[c:\~]$
Connecting to 192.168.13.111:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.

# Verify after the reboot: the value is now 65536
[root@dwhm01_2_111 ~]# blockdev --getra /dev/sdc1
65536
[root@dwhm01_2_111 ~]#
PS: The read-ahead value can also be set temporarily (non-persistently) with: blockdev --setra 65536 /dev/sdc1
7. Set the hostname
[root@dwhm01_2_111 ~]# more /etc/sysconfig/network
HOSTNAME=dwhm01_2_111
NETWORKING=yes
[root@dwhm01_2_111 ~]#
[root@mdw1 greenplum]# more /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.13.111  dwhm01_2_111 mdw1
192.168.13.112  dwhs01_2_112 smdw1
192.168.13.113  dwhs02_2_113 sdw1
192.168.13.114  dwhs03_2_114 sdw2
[root@mdw1 greenplum]#
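The same /etc/hosts must exist on every host. One way to push it out from mdw1 (a sketch; each scp prompts for the host's root password until the key exchange in section 8.4 is done):

for h in smdw1 sdw1 sdw2; do scp /etc/hosts $h:/etc/hosts; done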
Create the host list files
(1) Create host_file, listing the hostnames of every host in the Greenplum deployment:
[root@mdw1 greenplum]# more host_file
mdw1
smdw1
sdw1
sdw2
(2) Create hostfile_exkeys, listing the hostname of every network interface on every Greenplum host (servers may have more than one NIC):
[root@mdw1 greenplum]# more hostfile_exkeys
mdw1
smdw1
sdw1
sdw2
(3) Create hostfile_segonly, listing only the segment hosts:
[root@mdw1 greenplum]# more hostfile_segonly
sdw1
sdw2
[root@mdw1 greenplum]#
8. Install Greenplum
8.1 Download Greenplum
On the official download page the newest release is 5.0.0, but since this cluster runs CentOS 6.5, the download to pick is "Greenplum Database 4.3.12.0 for RedHat Enterprise Linux 5, 6 and 7".
[Screenshot: Pivotal Network download page]
Download link: https://network.pivotal.io/products/pivotal-gpdb#/releases/4540/file_groups/560
Unzip:
[root@dwhm01_2_111 soft]# ll
total 125060
-rw-r--r-- 1 root root 128061339 Apr 21 14:50 greenplum-db-4.3.12.0-rhel5-x86_64.zip
[root@dwhm01_2_111 soft]# unzip greenplum-db-4.3.12.0-rhel5-x86_64.zip
Archive:  greenplum-db-4.3.12.0-rhel5-x86_64.zip
  inflating: greenplum-db-4.3.12.0-rhel5-x86_64.bin
[root@dwhm01_2_111 soft]# ll
total 252472
-rwxr-xr-x 1 root root 130467774 Feb 28 06:11 greenplum-db-4.3.12.0-rhel5-x86_64.bin
-rw-r--r-- 1 root root 128061339 Apr 21 14:50 greenplum-db-4.3.12.0-rhel5-x86_64.zip
[root@dwhm01_2_111 soft]#
8.2 Installation requirements
Table 1. System Prerequisites for Greenplum Database 4.3
Operating System | SUSE Linux Enterprise Server 11 SP2; CentOS 5.0 or higher; Red Hat Enterprise Linux (RHEL) 5.0 or higher; Oracle Unbreakable Linux 5.5. Note: see the Greenplum Database Release Notes for current supported platform information.
File Systems | xfs required for data storage on SUSE Linux and Red Hat (ext3 supported for root file system)
Minimum CPU | Pentium Pro compatible (P3/Athlon and above)
Minimum Memory | 16 GB RAM per server
Disk Requirements | 150 MB per host for the Greenplum installation; approximately 300 MB per segment instance for metadata; appropriate free space for data, with disks at no more than 70% capacity; high-speed, local storage
Network Requirements | 10 Gigabit Ethernet within the array; dedicated, non-blocking switch
Software and Utilities | bash shell, GNU tar, GNU zip, GNU sed (used by Greenplum Database gpinitsystem)
8.3 Install on the master
Run: /bin/bash greenplum-db-4.3.12.0-rhel5-x86_64.bin -y
[root@dwhm01_2_111 yes]# /bin/bash greenplum-db-4.3.12.0-rhel5-x86_64.bin -y
# The installer presents several prompts; type yes at the console and press Enter each time.
# When asked for the installation directory, enter /data/greenplum-db-4.3.12.0 and confirm with yes.
# When the installer finishes, the result looks like this:
[root@dwhm01_2_111 data]# ll
total 127444
lrwxrwxrwx  1 root root        23 Apr 21 17:29 greenplum-db -> ./greenplum-db-4.3.12.0
drwxr-xr-x 11 root root      4096 Apr 21 17:41 greenplum-db-4.3.12.0
-rwxr-xr-x  1 root root 130467774 Apr 21 17:26 greenplum-db-4.3.12.0-rhel5-x86_64.bin
[root@dwhm01_2_111 data]#
[root@dwhm01_2_111 data]# ll greenplum-db-4.3.12.0
total 284
drwxr-xr-x  4 root root   4096 Feb 28 06:10 bin
drwxr-xr-x  2 root root   4096 Feb 28 04:52 demo
drwxr-xr-x  5 root root   4096 Feb 28 04:58 docs
drwxr-xr-x  2 root root   4096 Feb 28 04:59 etc
drwxr-xr-x  3 root root   4096 Feb 28 04:59 ext
-rwxr-xr-x  1 root root  43025 Feb 28 05:06 GPDB-LICENSE.txt
lrwxrwxrwx  1 root root     23 Apr 21 17:41 greenplum-db -> /usr/local/greenplum-db
-rw-r--r--  1 root root    731 Apr 21 17:29 greenplum_path.sh
drwxr-xr-x  6 root root   4096 Feb 28 04:59 include
drwxr-xr-x 10 root root  12288 Feb 28 05:06 lib
-rwxr-xr-x  1 root root 192912 Feb 28 05:06 LICENSE.thirdparty
drwxr-xr-x  2 root root   4096 Feb 28 06:10 sbin
drwxr-xr-x  4 root root   4096 Feb 28 04:52 share
[root@dwhm01_2_111 data]#
Then append the following to ~/.bashrc:
# vim ~/.bashrc
source /usr/local/greenplum-db/greenplum_path.sh
Create the symlink and source the environment file:
[root@dwhm01_2_111 data]# ln -s /data/greenplum-db-4.3.12.0 /usr/local/greenplum-db
[root@dwhm01_2_111 data]# su -
[root@dwhm01_2_111 ~]# source /usr/local/greenplum-db/greenplum_path.sh
[root@dwhm01_2_111 ~]#
8.4 Set up passwordless SSH for root
First generate a key pair on mdw1: ssh-keygen -t rsa -P ''
[root@dwhm01_2_111 ~]# ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
29:15:e2:27:f2:6d:0e:ae:50:12:52:27:6a:d7:33:c9 root@dwhm01_2_111
The key's randomart image is:
+--[ RSA 2048]----+
|  o   .   .      |
|   o oo...  .    |
|o....Eo o        |
|.... oo= .       |
| . .  + S        |
|    o  . =       |
|     .  . .      |
|      .  .       |
|       .         |
+-----------------+
[root@dwhm01_2_111 ~]#
Then copy the public key to smdw1, sdw1, and sdw2 to enable passwordless login.
(1) From mdw1, transfer the public key to smdw1:
[root@dwhm01_2_111 .ssh]# scp id_rsa.pub smdw1:/root/.ssh/id_rsa.pub_2_111
root@smdw1's password:
id_rsa.pub                100%  399  0.4KB/s  00:00
[root@dwhm01_2_111 .ssh]#
(2) Then on smdw1, generate its own key pair and append the copied key to authorized_keys:
[root@dwhs01_2_112 ~]# ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
96:e8:52:6e:8a:00:1a:26:3a:0f:9e:12:51:aa:8e:72 root@dwhs01_2_112
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|        .        |
|       o         |
|o .  .           |
|=o o    S        |
|O. +  .          |
|O. . +           |
|=*E. +           |
|o+o .            |
+-----------------+
[root@dwhs01_2_112 ~]# cd .ssh
[root@dwhs01_2_112 .ssh]# cat id_rsa.pub_2_111 >> authorized_keys
[root@dwhs01_2_112 .ssh]#
(3) Verify passwordless login from mdw1:
[root@dwhm01_2_111 .ssh]# ssh smdw1
[root@dwhs01_2_112 ~]# exit
logout
Connection to smdw1 closed.
[root@dwhm01_2_111 .ssh]#
(4) Repeat the same procedure for sdw1 and sdw2.
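As an alternative to the manual scp-and-append above, the ssh-copy-id helper that ships with OpenSSH does the same thing in one step (a sketch using the hostnames from section 0; each run prompts for that host's root password):

for h in smdw1 sdw1 sdw2; do ssh-copy-id root@$h; done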
8.5 Exchange SSH keys across all hosts
[root@dwhm01_2_111 greenplum_file]# gpssh-exkeys -f host_file
[STEP 1 of 5] create local ID and authorize on local host
  ... /root/.ssh/id_rsa file exists ... key generation skipped
[STEP 2 of 5] keyscan all hosts and update known_hosts file
[STEP 3 of 5] authorize current user on remote hosts
  ... send to smdw1
  ... send to sdw1
  ... send to sdw2
[STEP 4 of 5] determine common authentication file content
[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with smdw1
  ... finished key exchange with sdw1
  ... finished key exchange with sdw2
[INFO] completed successfully
[root@dwhm01_2_111 greenplum_file]#
8.6 Create data directories
Perform on mdw1 and smdw1.
Create the fstab entry:
vim /etc/fstab
UUID=6e04b4eb-9a7b-48d4-ba91-b752823b2a72 /data ext4 defaults,rw,noatime,inode64,allocsize=16m 1 2
(Note: inode64 and allocsize=16m are XFS mount options; on an ext4 file system, defaults,rw,noatime is sufficient.)
Create the master data directory on mdw1:
mkdir -p /data/master
8.7 Create users and groups
Commands:
gpssh -f host_file
=> groupadd -g 3030 gpadmin
=> groupadd -g 3040 gpmon
=> useradd -u 3030 -g gpadmin -m -s /bin/bash gpadmin
=> useradd -u 3040 -g gpmon -m -s /bin/bash gpmon
=> echo chys_0418 | passwd gpadmin --stdin
=> echo chys_0418 | passwd gpmon --stdin
=> chown -R gpadmin:gpadmin /data
Execution:
[root@dwhm01_2_111 greenplum_file]# gpssh -f host_file
Note: command history unsupported on this machine ...
=> groupadd -g 3030 gpadmin
[ mdw1] groupadd: group 'gpadmin' already exists
[ sdw1] groupadd: group 'gpadmin' already exists
[smdw1] groupadd: group 'gpadmin' already exists
[ sdw2] groupadd: group 'gpadmin' already exists
=> groupadd -g 3040 gpmon
[ mdw1]
[ sdw1]
[smdw1]
[ sdw2]
=> useradd -u 3030 -g gpadmin -m -s /bin/bash gpadmin
[ mdw1] useradd: user 'gpadmin' already exists
[ sdw1] useradd: user 'gpadmin' already exists
[smdw1] useradd: user 'gpadmin' already exists
[ sdw2] useradd: user 'gpadmin' already exists
=> useradd -u 3040 -g gpmon -m -s /bin/bash gpmon
[ mdw1]
[ sdw1]
[smdw1]
[ sdw2]
=> echo chys_0418 | passwd gpadmin --stdin
[ mdw1] Changing password for user gpadmin.
[ mdw1] passwd: all authentication tokens updated successfully.
[ sdw1] Changing password for user gpadmin.
[ sdw1] passwd: all authentication tokens updated successfully.
[smdw1] Changing password for user gpadmin.
[smdw1] passwd: all authentication tokens updated successfully.
[ sdw2] Changing password for user gpadmin.
[ sdw2] passwd: all authentication tokens updated successfully.
=> echo chys_0418 | passwd gpmon --stdin
[ mdw1] Changing password for user gpmon.
[ mdw1] passwd: all authentication tokens updated successfully.
[ sdw1] Changing password for user gpmon.
[ sdw1] passwd: all authentication tokens updated successfully.
[smdw1] Changing password for user gpmon.
[smdw1] passwd: all authentication tokens updated successfully.
[ sdw2] Changing password for user gpmon.
[ sdw2] passwd: all authentication tokens updated successfully.
=> chown -R gpadmin:gpadmin /data
[ mdw1]
[ sdw1]
[smdw1]
[ sdw2]
=>
8.8 Configure the gpadmin environment
Master and standby master hosts:
Edit ~/.bashrc and add:
source /usr/local/greenplum-db/greenplum_path.sh
MASTER_DATA_DIRECTORY=/data/master/gpseg-1
export MASTER_DATA_DIRECTORY    # the default directory gpstart operates on
Segment hosts:
Edit ~/.bashrc and add:
source /usr/local/greenplum-db/greenplum_path.sh
8.9 Set up clock synchronization
Check that the clocks agree:
[root@dwhm01_2_111 greenplum_file]# gpssh -f host_file
Note: command history unsupported on this machine ...
=> date
[ mdw1] Fri Apr 21 20:20:47 CST 2017
[ sdw1] Fri Apr 21 20:20:47 CST 2017
[ sdw2] Fri Apr 21 20:20:47 CST 2017
[smdw1] Fri Apr 21 20:20:47 CST 2017
=>
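The check above only verifies that the clocks currently agree. A minimal ongoing setup, assuming ntpd on CentOS 6 with the segment hosts following the master (a sketch, not part of the original notes):

# on smdw1, sdw1, sdw2: add to /etc/ntp.conf
server mdw1
# on all hosts: enable and start the NTP service
chkconfig ntpd on
service ntpd start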
8.10 Disable unnecessary services
[root@dwhm01_2_111 greenplum_file]# gpssh -f host_file
Note: command history unsupported on this machine ...
=> chkconfig avahi-daemon off
=> chkconfig avahi-dnsconfd off
=> chkconfig conman off
=> chkconfig bluetooth off
=> chkconfig cpuspeed off
=> chkconfig setroubleshoot off
=> chkconfig hidd off
=> chkconfig hplip off
=> chkconfig isdn off
=> chkconfig kudzu off
=> chkconfig yum-updatesd off
...
9. Install Greenplum on the remaining servers
Command: gpseginstall -f hostfile_exkeys -u gpadmin -p chys_0418
Execution:
[root@dwhm01_2_111 greenplum_file]# gpseginstall -f hostfile_exkeys -u gpadmin -p chys_0418
20170421:20:25:53:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /data/greenplum-db-4.3.12.0
binary_dir_location /data
binary_dir_name greenplum-db-4.3.12.0
20170421:20:25:53:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-check cluster password access
20170421:20:25:54:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-de-duplicate hostnames
20170421:20:25:54:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-master hostname: dwhm01_2_111
20170421:20:25:54:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-check for user gpadmin on cluster
20170421:20:25:55:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-add user gpadmin on master
20170421:20:25:55:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-add user gpadmin on cluster
20170421:20:25:55:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-chown -R gpadmin:gpadmin /data/greenplum-db
20170421:20:25:55:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-chown -R gpadmin:gpadmin /data/greenplum-db-4.3.12.0
20170421:20:25:55:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-rm -f /data/greenplum-db-4.3.12.0.tar; rm -f /data/greenplum-db-4.3.12.0.tar.gz
20170421:20:25:55:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-cd /data; tar cf greenplum-db-4.3.12.0.tar greenplum-db-4.3.12.0
20170421:20:25:57:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-gzip /data/greenplum-db-4.3.12.0.tar
20170421:20:26:29:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-remote command: mkdir -p /data
20170421:20:26:29:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-remote command: rm -rf /data/greenplum-db-4.3.12.0
20170421:20:26:30:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-scp software to remote location
20170421:20:26:34:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-remote command: gzip -f -d /data/greenplum-db-4.3.12.0.tar.gz
20170421:20:26:40:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-md5 check on remote location
20170421:20:26:41:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-remote command: cd /data; tar xf greenplum-db-4.3.12.0.tar
20170421:20:26:43:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-remote command: rm -f /data/greenplum-db-4.3.12.0.tar
20170421:20:26:43:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-remote command: cd /data; rm -f greenplum-db; ln -fs greenplum-db-4.3.12.0 greenplum-db
20170421:20:26:44:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /data/greenplum-db
20170421:20:26:44:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /data/greenplum-db-4.3.12.0
20170421:20:26:45:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-rm -f /data/greenplum-db-4.3.12.0.tar.gz
20170421:20:26:45:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-Changing system passwords ...
20170421:20:26:47:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-exchange ssh keys for user root
20170421:20:26:50:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-exchange ssh keys for user gpadmin
20170421:20:26:54:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-/data/greenplum-db/./sbin/gpfixuserlimts -f /etc/security/limits.conf -u gpadmin
20170421:20:26:54:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-remote command: . /data/greenplum-db/./greenplum_path.sh; /data/greenplum-db/./sbin/gpfixuserlimts -f /etc/security/limits.conf -u gpadmin
20170421:20:26:55:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-version string on master: gpssh version 4.3.12.0 build 1
20170421:20:26:55:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-remote command: . /data/greenplum-db/./greenplum_path.sh; /data/greenplum-db/./bin/gpssh --version
20170421:20:26:56:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-remote command: . /data/greenplum-db-4.3.12.0/greenplum_path.sh; /data/greenplum-db-4.3.12.0/bin/gpssh --version
20170421:20:27:02:004498 gpseginstall:dwhm01_2_111:root-[INFO]:-SUCCESS -- Requested commands completed
[root@dwhm01_2_111 greenplum_file]#
Create the symlink on the remaining hosts (hostfile_nm lists the hosts other than the master: smdw1, sdw1, sdw2):
[root@dwhm01_2_111 greenplum_file]# gpssh -f hostfile_nm -e "ln -s /data/greenplum-db-4.3.12.0 /usr/local/greenplum-db"
[ sdw2] ln -s /data/greenplum-db-4.3.12.0 /usr/local/greenplum-db
[smdw1] ln -s /data/greenplum-db-4.3.12.0 /usr/local/greenplum-db
[ sdw1] ln -s /data/greenplum-db-4.3.12.0 /usr/local/greenplum-db
[root@dwhm01_2_111 greenplum_file]#
10. System checks
10.1 Verify the installation
[root@dwhm01_2_111 greenplum_file]# su - gpadmin
[gpadmin@dwhm01_2_111 ~]$ source /usr/local/greenplum-db/greenplum_path.sh
[gpadmin@dwhm01_2_111 ~]$ cd /data/greenplum_file/
[gpadmin@dwhm01_2_111 greenplum_file]$ gpssh -f host_file -e ls -l $GPHOME
[ mdw1] ls -l /data/greenplum-db/.
[ mdw1] total 284
[ mdw1] drwxr-xr-x  4 gpadmin gpadmin   4096 Feb 28 06:10 bin
[ mdw1] drwxr-xr-x  2 gpadmin gpadmin   4096 Feb 28 04:52 demo
[ mdw1] drwxr-xr-x  5 gpadmin gpadmin   4096 Feb 28 04:58 docs
[ mdw1] drwxr-xr-x  2 gpadmin gpadmin   4096 Feb 28 04:59 etc
[ mdw1] drwxr-xr-x  3 gpadmin gpadmin   4096 Feb 28 04:59 ext
[ mdw1] -rwxr-xr-x  1 gpadmin gpadmin  43025 Feb 28 05:06 GPDB-LICENSE.txt
[ mdw1] lrwxrwxrwx  1 gpadmin gpadmin     23 Apr 21 17:41 greenplum-db -> /usr/local/greenplum-db
[ mdw1] -rw-r--r--  1 gpadmin gpadmin    731 Apr 21 17:29 greenplum_path.sh
[ mdw1] drwxr-xr-x  6 gpadmin gpadmin   4096 Feb 28 04:59 include
[ mdw1] drwxr-xr-x 10 gpadmin gpadmin  12288 Feb 28 05:06 lib
[ mdw1] -rwxr-xr-x  1 gpadmin gpadmin 192912 Feb 28 05:06 LICENSE.thirdparty
[ mdw1] drwxr-xr-x  2 gpadmin gpadmin   4096 Feb 28 06:10 sbin
[ mdw1] drwxr-xr-x  4 gpadmin gpadmin   4096 Feb 28 04:52 share
[ sdw1] ls -l /data/greenplum-db/.
[ sdw1] total 284
[ sdw1] drwxr-xr-x.  4 gpadmin gpadmin   4096 Feb 28 06:10 bin
[ sdw1] drwxr-xr-x.  2 gpadmin gpadmin   4096 Feb 28 04:52 demo
[ sdw1] drwxr-xr-x.  5 gpadmin gpadmin   4096 Feb 28 04:58 docs
[ sdw1] drwxr-xr-x.  2 gpadmin gpadmin   4096 Feb 28 04:59 etc
[ sdw1] drwxr-xr-x.  3 gpadmin gpadmin   4096 Feb 28 04:59 ext
[ sdw1] -rwxr-xr-x.  1 gpadmin gpadmin  43025 Feb 28 05:06 GPDB-LICENSE.txt
[ sdw1] lrwxrwxrwx.  1 gpadmin gpadmin     23 Apr 21 20:26 greenplum-db -> /usr/local/greenplum-db
[ sdw1] -rw-r--r--.  1 gpadmin gpadmin    731 Apr 21 17:29 greenplum_path.sh
[ sdw1] drwxr-xr-x.  6 gpadmin gpadmin   4096 Feb 28 04:59 include
[ sdw1] drwxr-xr-x. 10 gpadmin gpadmin  12288 Feb 28 05:06 lib
[ sdw1] -rwxr-xr-x.  1 gpadmin gpadmin 192912 Feb 28 05:06 LICENSE.thirdparty
[ sdw1] drwxr-xr-x.  2 gpadmin gpadmin   4096 Feb 28 06:10 sbin
[ sdw1] drwxr-xr-x.  4 gpadmin gpadmin   4096 Feb 28 04:52 share
[smdw1] ls -l /data/greenplum-db/.
[smdw1] total 284
[smdw1] drwxr-xr-x.  4 gpadmin gpadmin   4096 Feb 28 06:10 bin
[smdw1] drwxr-xr-x.  2 gpadmin gpadmin   4096 Feb 28 04:52 demo
[smdw1] drwxr-xr-x.  5 gpadmin gpadmin   4096 Feb 28 04:58 docs
[smdw1] drwxr-xr-x.  2 gpadmin gpadmin   4096 Feb 28 04:59 etc
[smdw1] drwxr-xr-x.  3 gpadmin gpadmin   4096 Feb 28 04:59 ext
[smdw1] -rwxr-xr-x.  1 gpadmin gpadmin  43025 Feb 28 05:06 GPDB-LICENSE.txt
[smdw1] lrwxrwxrwx.  1 gpadmin gpadmin     23 Apr 21 20:26 greenplum-db -> /usr/local/greenplum-db
[smdw1] -rw-r--r--.  1 gpadmin gpadmin    731 Apr 21 17:29 greenplum_path.sh
[smdw1] drwxr-xr-x.  6 gpadmin gpadmin   4096 Feb 28 04:59 include
[smdw1] drwxr-xr-x. 10 gpadmin gpadmin  12288 Feb 28 05:06 lib
[smdw1] -rwxr-xr-x.  1 gpadmin gpadmin 192912 Feb 28 05:06 LICENSE.thirdparty
[smdw1] drwxr-xr-x.  2 gpadmin gpadmin   4096 Feb 28 06:10 sbin
[smdw1] drwxr-xr-x.  4 gpadmin gpadmin   4096 Feb 28 04:52 share
[ sdw2] ls -l /data/greenplum-db/.
[ sdw2] total 284
[ sdw2] drwxr-xr-x.  4 gpadmin gpadmin   4096 Feb 28 06:10 bin
[ sdw2] drwxr-xr-x.  2 gpadmin gpadmin   4096 Feb 28 04:52 demo
[ sdw2] drwxr-xr-x.  5 gpadmin gpadmin   4096 Feb 28 04:58 docs
[ sdw2] drwxr-xr-x.  2 gpadmin gpadmin   4096 Feb 28 04:59 etc
[ sdw2] drwxr-xr-x.  3 gpadmin gpadmin   4096 Feb 28 04:59 ext
[ sdw2] -rwxr-xr-x.  1 gpadmin gpadmin  43025 Feb 28 05:06 GPDB-LICENSE.txt
[ sdw2] lrwxrwxrwx.  1 gpadmin gpadmin     23 Apr 21 20:26 greenplum-db -> /usr/local/greenplum-db
[ sdw2] -rw-r--r--.  1 gpadmin gpadmin    731 Apr 21 17:29 greenplum_path.sh
[ sdw2] drwxr-xr-x.  6 gpadmin gpadmin   4096 Feb 28 04:59 include
[ sdw2] drwxr-xr-x. 10 gpadmin gpadmin  12288 Feb 28 05:06 lib
[ sdw2] -rwxr-xr-x.  1 gpadmin gpadmin 192912 Feb 28 05:06 LICENSE.thirdparty
[ sdw2] drwxr-xr-x.  2 gpadmin gpadmin   4096 Feb 28 06:10 sbin
[ sdw2] drwxr-xr-x.  4 gpadmin gpadmin   4096 Feb 28 04:52 share
[gpadmin@dwhm01_2_111 greenplum_file]$
PS:
If you can log in to every host without being prompted for a password, the installation is fine. Every host should show identical contents at the installation path, owned by the gpadmin user. If you are prompted for a password, re-exchange the SSH keys:
$ gpssh-exkeys -f host_file
10.2 Check system parameters
Check command: gpcheck -f host_file -m mdw -s smdw
Execution:
[root@dwhm01_2_111 greenplum_file]# gpcheck -f host_file -m mdw -s smdw
20170426:16:41:51:023514 gpcheck:dwhm01_2_111:root-[INFO]:-dedupe hostnames
20170426:16:41:51:023514 gpcheck:dwhm01_2_111:root-[INFO]:-Detected platform: Generic Linux Cluster
20170426:16:41:51:023514 gpcheck:dwhm01_2_111:root-[INFO]:-generate data on servers
20170426:16:42:07:023514 gpcheck:dwhm01_2_111:root-[INFO]:-copy data files from servers
20170426:16:42:07:023514 gpcheck:dwhm01_2_111:root-[INFO]:-delete remote tmp files
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[INFO]:-Using gpcheck config file: /data/greenplum-db/./etc/gpcheck.cnf
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs01_2_112): on device (/dev/sdb1) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs01_2_112): on device (/dev/sdc1) blockdev readahead value '65536' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs01_2_112): on device (/dev/sdc) blockdev readahead value '65536' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs01_2_112): on device (/dev/sdb) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs01_2_112): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs01_2_112): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs01_2_112): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs01_2_112): /etc/sysctl.conf value for key 'kernel.shmmax' has value '5000000000' and expects '500000000'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs01_2_112): /etc/sysctl.conf value for key 'kernel.sem' has value '250 5120000 100 20480' and expects '250 512000 100 2048'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs01_2_112): /etc/sysctl.conf value for key 'kernel.shmall' has value '40000000000' and expects '4000000000'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs02_2_113): on device (/dev/sdb1) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs02_2_113): on device (/dev/sdc1) blockdev readahead value '65536' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs02_2_113): on device (/dev/sdc) blockdev readahead value '65536' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs02_2_113): on device (/dev/sdb) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs02_2_113): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs02_2_113): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs02_2_113): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs02_2_113): /etc/sysctl.conf value for key 'kernel.shmmax' has value '5000000000' and expects '500000000'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs02_2_113): /etc/sysctl.conf value for key 'kernel.sem' has value '250 5120000 100 20480' and expects '250 512000 100 2048'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs02_2_113): /etc/sysctl.conf value for key 'kernel.shmall' has value '40000000000' and expects '4000000000'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhm01_2_111): on device (/dev/sdb1) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhm01_2_111): on device (/dev/sdc1) blockdev readahead value '65536' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhm01_2_111): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhm01_2_111): on device (/dev/sdb) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhm01_2_111): on device (/dev/sdc) blockdev readahead value '65536' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhm01_2_111): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhm01_2_111): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhm01_2_111): /etc/sysctl.conf value for key 'kernel.shmmax' has value '5000000000' and expects '500000000'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhm01_2_111): /etc/sysctl.conf value for key 'kernel.sem' has value '250 5120000 100 20480' and expects '250 512000 100 2048'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhm01_2_111): /etc/sysctl.conf value for key 'kernel.shmall' has value '40000000000' and expects '4000000000'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs03_2_114): on device (/dev/sdb1) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs03_2_114): on device (/dev/sdc1) blockdev readahead value '65536' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs03_2_114): on device (/dev/sdc) blockdev readahead value '65536' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs03_2_114): on device (/dev/sdb) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs03_2_114): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs03_2_114): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs03_2_114): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs03_2_114): /etc/sysctl.conf value for key 'kernel.shmmax' has value '5000000000' and expects '500000000'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs03_2_114): /etc/sysctl.conf value for key 'kernel.sem' has value '250 5120000 100 20480' and expects '250 512000 100 2048'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[ERROR]:-GPCHECK_ERROR host(dwhs03_2_114): /etc/sysctl.conf value for key 'kernel.shmall' has value '40000000000' and expects '4000000000'
20170426:16:42:08:023514 gpcheck:dwhm01_2_111:root-[INFO]:-gpcheck completing...
[root@dwhm01_2_111 greenplum_file]#
10.3 Check network performance
gpcheckperf -f host_file -r N -d /tmp > checknetwork.out
[root@dwhm01_2_111 greenplum_file]# gpcheckperf -f hostfile_exkeys -r N -d /tmp > subnet1.out
/root/.bashrc: line 13: /usr/local/greenplum-db/greenplum_path.sh: No such file or directory
/root/.bashrc: line 13: /usr/local/greenplum-db/greenplum_path.sh: No such file or directory
/root/.bashrc: line 13: /usr/local/greenplum-db/greenplum_path.sh: No such file or directory
[root@dwhm01_2_111 greenplum_file]#
[root@dwhm01_2_111 greenplum_file]# more checknetwork.out
/data/greenplum-db/./bin/gpcheckperf -f host_file -r N -d /tmp
-------------------
--  NETPERF TEST
-------------------
====================
==  RESULT
====================
Netperf bisection bandwidth test
mdw1 -> smdw1 = 115.100000
sdw1 -> sdw2 = 113.350000
smdw1 -> mdw1 = 115.310000
sdw2 -> sdw1 = 115.340000
Summary:
sum = 459.10 MB/sec
min = 113.35 MB/sec
max = 115.34 MB/sec
avg = 114.78 MB/sec
median = 115.31 MB/sec
[root@dwhm01_2_111 greenplum_file]#
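For context: roughly 115 MB/sec per stream is the practical ceiling of gigabit Ethernet (1 Gbit/s ≈ 119 MiB/s), so this network is running at 1GbE line rate rather than on the 10GbE recommended in Table 1.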
10.4 Check disk I/O and memory bandwidth
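This section was left empty in the original notes. Per the Greenplum documentation, the same gpcheckperf utility also runs the disk I/O (-r d) and memory/stream bandwidth (-r s) tests; a sketch using this cluster's data directories (adjust the -d paths as needed; -D prints per-host statistics):

gpcheckperf -f hostfile_segonly -r ds -D -d /data/data1/primary -d /data/data2/primary > checkio.out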
11. Configuration
Create the gpconfig directory and copy the template file:
su - gpadmin
vim .bash_profile
export GPHOME=/usr/local/greenplum-db
mkdir -p $GPHOME/gpconfig
cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config $GPHOME/gpconfig/
Create the data directory folders on all hosts:
[gpadmin@dwhm01_2_111 greenplum_file]$ gpssh -f host_file
Note: command history unsupported on this machine ...
=> mkdir -p /data/data1/primary
=> mkdir -p /data/data2/primary
=> mkdir -p /data/master
Create the segment host file:
[root@dwhm01_2_111 greenplum_file]# more hostfile_gpssh_segonly
sdw1
sdw2
[root@dwhm01_2_111 greenplum_file]#
Edit the configuration file:
vim /usr/local/greenplum-db/gpconfig/gpinitsystem_config
ARRAY_NAME="EMC Greenplum DW"
SEG_PREFIX=gpseg
PORT_BASE=40000
declare -a DATA_DIRECTORY=(/data/data1/primary /data/data1/primary)
MASTER_HOSTNAME=dwhm01_2_111
MASTER_DIRECTORY=/data/master
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
MIRROR_PORT_BASE=50000
REPLICATION_PORT_BASE=41000
MIRROR_REPLICATION_PORT_BASE=51000
declare -a MIRROR_DATA_DIRECTORY=(/data/data1/mirror /data/data1/mirror)
Reference: http://gpdb.docs.pivotal.io/43120/install_guide/init_gpdb.html
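A note on this config: DATA_DIRECTORY lists two entries, so gpinitsystem creates two primary segment instances per segment host; with the two hosts in hostfile_segonly that yields the four segments reported during initialization in section 12.2. MIRROR_PORT_BASE and MIRROR_DATA_DIRECTORY are declared but no mirrors are requested at init time, so the cluster comes up with mirroring OFF (see the -S warning in the gpinitsystem output below).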
12. Initialize the database
12.1 Initialization command
Run the following as gpadmin. hostfile_segonly lists the segment hosts, one hostname per line (two lines per host for dual-NIC servers); smdw1 after -s is the standby master's hostname:
gpinitsystem -c /usr/local/greenplum-db/gpconfig/gpinitsystem_config -h hostfile_segonly -s smdw1 -S
12.2 Execution
[gpadmin@dwhm01_2_111 greenplum_file]$ gpinitsystem -c /usr/local/greenplum-db/gpconfig/gpinitsystem_config -h hostfile_segonly -s smdw1 -S
20170427:15:37:02:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20170427:15:37:02:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Reading Greenplum configuration file /usr/local/greenplum-db/gpconfig/gpinitsystem_config
20170427:15:37:02:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Locale has not been set in /usr/local/greenplum-db/gpconfig/gpinitsystem_config, will set to default value
20170427:15:37:02:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Locale set to en_US.utf8
20170427:15:37:02:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-No DATABASE_NAME set, will exit following template1 updates
20170427:15:37:02:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-MASTER_MAX_CONNECT not set, will set to default value 250
20170427:15:37:03:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Checking configuration parameters, Completed
20170427:15:37:03:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
..
20170427:15:37:03:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Configuring build for standard array
20170427:15:37:03:062124 gpinitsystem:dwhm01_2_111:gpadmin-[WARN]:-Option -S supplied, but no mirrors have been defined, ignoring -S option
20170427:15:37:03:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Commencing multi-home checks, Completed
20170427:15:37:03:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Building primary segment instance array, please wait...
....
20170427:15:37:04:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Checking Master host
20170427:15:37:04:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Checking new segment hosts, please wait...
....
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Checking new segment hosts, Completed
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Greenplum Database Creation Parameters
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:---------------------------------------
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master Configuration
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:---------------------------------------
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master instance name = EMC Greenplum DW
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master hostname = dwhm01_2_111
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master port = 5432
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master instance dir = /data/master/gpseg-1
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master LOCALE = en_US.utf8
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Greenplum segment prefix = gpseg
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master Database =
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master connections = 250
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master buffers = 128000kB
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Segment connections = 750
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Segment buffers = 128000kB
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Checkpoint segments = 8
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Encoding = UNICODE
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Postgres param file = Off
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Initdb to be used = /usr/local/greenplum-db/bin/initdb
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-GP_LIBRARY_PATH is = /usr/local/greenplum-db/lib
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Ulimit check = Passed
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Array host connect type = Single hostname per node
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master IP address [1] = ::1
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master IP address [2] = 192.168.13.111
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Master IP address [3] = fe80::217:faff:fe00:9565
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Standby Master = smdw1
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Primary segment # = 2
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Standby IP address = ::1
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Standby IP address = 192.168.13.112
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Standby IP address = fe80::217:faff:fe00:9248
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Total Database segments = 4
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Trusted shell = ssh
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Number segment hosts = 2
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Mirroring config = OFF
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:----------------------------------------
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:----------------------------------------
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-sdw1 /data/data1/primary/gpseg0 40000 2 0
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-sdw1 /data/data1/primary/gpseg1 40001 3 1
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-sdw2 /data/data1/primary/gpseg2 40000 4 2
20170427:15:37:08:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-sdw2 /data/data1/primary/gpseg3 40001 5 3
Continue with Greenplum creation Yy/Nn> y
20170427:15:37:12:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Building the Master instance database, please wait...
20170427:15:37:22:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Starting the Master in admin mode
20170427:15:37:29:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20170427:15:37:29:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
....
20170427:15:37:29:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
.....................
20170427:15:37:50:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:------------------------------------------------
20170427:15:37:50:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Parallel process exit status
20170427:15:37:50:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:------------------------------------------------
20170427:15:37:50:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Total processes marked as completed = 4
20170427:15:37:50:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Total processes marked as killed = 0
20170427:15:37:50:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Total processes marked as failed = 0
20170427:15:37:50:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:------------------------------------------------
20170427:15:37:50:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Deleting distributed backout files
20170427:15:37:50:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Removing back out file
20170427:15:37:50:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-No errors generated from parallel processes
20170427:15:37:50:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Starting initialization of standby master smdw1
20170427:15:37:50:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Checking for filespace directory /data/master/gpseg-1 on smdw1
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:------------------------------------------------------
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:------------------------------------------------------
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Greenplum master hostname = dwhm01_2_111
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Greenplum master data directory = /data/master/gpseg-1
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Greenplum master port = 5432
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Greenplum standby master hostname = smdw1
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Greenplum standby master port = 5432
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Greenplum standby master data directory = /data/master/gpseg-1
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Greenplum update system catalog = On
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:------------------------------------------------------
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:- Filespace locations
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:------------------------------------------------------
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-pg_system -> /data/master/gpseg-1
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-The packages on smdw1 are consistent.
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Adding standby master to catalog...
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Database catalog updated successfully.
20170427:15:37:51:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Updating pg_hba.conf file...
20170427:15:37:57:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20170427:15:37:59:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Updating filespace flat files...
20170427:15:37:59:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Filespace flat file updated successfully.
20170427:15:37:59:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Starting standby master
20170427:15:37:59:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Checking if standby master is running on host: smdw1 in directory: /data/master/gpseg-1
20170427:15:38:01:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20170427:15:38:06:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20170427:15:38:06:012543 gpinitstandby:dwhm01_2_111:gpadmin-[INFO]:-Successfully created standby master on smdw1
20170427:15:38:06:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Successfully completed standby master initialization
20170427:15:38:07:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
20170427:15:38:08:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Starting gpstop with args: -a -l /home/gpadmin/gpAdminLogs -i -m -d /data/master/gpseg-1
20170427:15:38:08:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Gathering information and validating the environment...
20170427:15:38:08:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170427:15:38:08:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Obtaining Segment details from master...
20170427:15:38:08:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.12.0 build 1'
20170427:15:38:08:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-There are 0 connections to the database
20170427:15:38:08:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='immediate'
20170427:15:38:08:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Master host=dwhm01_2_111
20170427:15:38:08:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=immediate
20170427:15:38:08:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Master segment instance directory=/data/master/gpseg-1
20170427:15:38:09:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20170427:15:38:09:012930 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Terminating processes for segment /data/master/gpseg-1
20170427:15:38:09:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /data/master/gpseg-1
20170427:15:38:09:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Gathering information and validating the environment...
20170427:15:38:09:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.12.0 build 1'
20170427:15:38:09:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20170427:15:38:09:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Starting Master instance in admin mode
20170427:15:38:10:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170427:15:38:10:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Obtaining Segment details from master...
20170427:15:38:10:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Setting new master era
20170427:15:38:10:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Master Started...
20170427:15:38:10:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Shutting down master
20170427:15:38:12:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
..
20170427:15:38:14:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Process results...
20170427:15:38:14:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-----------------------------------------------------
20170427:15:38:14:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-   Successful segment starts = 4
20170427:15:38:14:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-   Failed segment starts = 0
20170427:15:38:14:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration) = 0
20170427:15:38:14:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-----------------------------------------------------
20170427:15:38:14:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-
20170427:15:38:14:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Successfully started 4 of 4 segment instances
20170427:15:38:14:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-----------------------------------------------------
20170427:15:38:14:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Starting Master instance dwhm01_2_111 directory /data/master/gpseg-1
20170427:15:38:15:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Command pg_ctl reports Master dwhm01_2_111 instance active
20170427:15:38:15:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Starting standby master
20170427:15:38:15:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Checking if standby master is running on host: smdw1 in directory: /data/master/gpseg-1
20170427:15:38:18:013017 gpstart:dwhm01_2_111:gpadmin-[INFO]:-Database successfully started
20170427:15:38:18:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20170427:15:38:18:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Loading gp_toolkit...
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[WARN]:-*******************************************************
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[WARN]:-Scan of log file indicates that some warnings or errors
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[WARN]:-were generated during the array creation
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Please review contents of log file
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-/home/gpadmin/gpAdminLogs/gpinitsystem_20170427.log
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-To determine level of criticality
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-These messages could be from a previous run of the utility
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-that was called today!
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[WARN]:-*******************************************************
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Greenplum Database instance successfully created
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-------------------------------------------------------
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-To complete the environment configuration, please
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/data/master/gpseg-1"
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-   to access the Greenplum scripts for this instance:
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-   or, use -d /data/master/gpseg-1 option for the Greenplum scripts
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-   Example gpstate -d /data/master/gpseg-1
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20170427.log
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Standby Master smdw1 has been configured
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-To activate the Standby Master Segment in the event of Master
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-failure review options for gpactivatestandby
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-------------------------------------------------------
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-The Master /data/master/gpseg-1/pg_hba.conf post gpinitsystem
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-new array must be explicitly added to this file
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-located in the /usr/local/greenplum-db/docs directory
20170427:15:38:20:062124 gpinitsystem:dwhm01_2_111:gpadmin-[INFO]:-------------------------------------------------------
[gpadmin@dwhm01_2_111 greenplum_file]$
Log in after the installation:
[gpadmin@dwhm01_2_111 greenplum_file]$ psql -d postgres
psql (8.2.15)
Type "help" for help.
postgres=# help
You are using psql, the command-line interface to PostgreSQL.
Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit
postgres=#
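A couple of quick sanity checks at this point (standard Greenplum commands, not part of the original notes): gpstate -s prints a full cluster status summary, and the segment layout can be read straight from the catalog:

gpstate -s
psql -d postgres -c "SELECT dbid, content, role, status, port, hostname FROM gp_segment_configuration ORDER BY dbid;"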
12.3 Check the Greenplum processes
Process check command: gpssh -f host_file -e "ps -eaf|grep green"
[gpadmin@dwhm01_2_111 greenplum_file]$ gpssh -f host_file -e "ps -eaf|grep green"
[ mdw1] ps -eaf|grep green
[ mdw1] gpadmin 13079     1  0 15:38 ?      00:00:00 /data/greenplum-db-4.3.12.0/bin/postgres -D /data/master/gpseg-1 -p 5432 -b 1 -z 4 --silent-mode=true -i -M master -C -1 -x 6 -E
[ mdw1] gpadmin 16989 14788  8 16:09 pts/0  00:00:00 python /data/greenplum-db/./bin/gpssh -f host_file -e ps -eaf|grep green
[ mdw1] gpadmin 17071 17046  0 16:09 pts/30 00:00:00 grep green
[smdw1] ps -eaf|grep green
[smdw1] gpadmin  9473     1  0 15:38 ?      00:00:00 /data/greenplum-db-4.3.12.0/bin/postgres -D /data/master/gpseg-1 -p 5432 -b 6 -z 4 --silent-mode=true -i -M master -C -1 -x 0 -y -E
[smdw1] gpadmin  9745  9724  0 16:09 pts/1  00:00:00 grep green
[ sdw2] ps -eaf|grep green
[ sdw2] gpadmin 11245     1  0 15:38 ?      00:00:00 /data/greenplum-db-4.3.12.0/bin/postgres -D /data/data1/primary/gpseg2 -p 40000 -b 4 -z 4 --silent-mode=true -i -M mirrorless -C 2
[ sdw2] gpadmin 11246     1  0 15:38 ?      00:00:00 /data/greenplum-db-4.3.12.0/bin/postgres -D /data/data1/primary/gpseg3 -p 40001 -b 5 -z 4 --silent-mode=true -i -M mirrorless -C 3
[ sdw2] gpadmin 11583 11561  0 16:09 pts/6  00:00:00 grep green
[ sdw1] ps -eaf|grep green
[ sdw1] gpadmin 11250     1  0 15:38 ?      00:00:00 /data/greenplum-db-4.3.12.0/bin/postgres -D /data/data1/primary/gpseg1 -p 40001 -b 3 -z 4 --silent-mode=true -i -M mirrorless -C 1
[ sdw1] gpadmin 11251     1  0 15:38 ?      00:00:00 /data/greenplum-db-4.3.12.0/bin/postgres -D /data/data1/primary/gpseg0 -p 40000 -b 2 -z 4 --silent-mode=true -i -M mirrorless -C 0
[ sdw1] gpadmin 11589 11568  0 16:09 pts/3  00:00:00 grep green
[gpadmin@dwhm01_2_111 greenplum_file]$
13. Install the Performance Monitor platform
13.1 Install the Performance Monitor data collection agents
(1) Install command: gpperfmon_install --enable --password gpmon_ckys0718 --port 5432
gpmon_ckys0718 is the gpmon password.
[gpadmin@dwhm01_2_111 greenplum_file]$ gpperfmon_install --enable --password gpmon_ckys0718 --port 5432
20170427:16:17:39:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-PGPORT=5432 psql -f /usr/local/greenplum-db/lib/gpperfmon/gpperfmon3.sql template1 >& /dev/null
20170427:16:17:48:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-PGPORT=5432 psql -f /usr/local/greenplum-db/lib/gpperfmon/gpperfmon4.sql gpperfmon >& /dev/null
20170427:16:17:48:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-PGPORT=5432 psql -f /usr/local/greenplum-db/lib/gpperfmon/gpperfmon41.sql gpperfmon >& /dev/null
20170427:16:17:50:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-PGPORT=5432 psql -f /usr/local/greenplum-db/lib/gpperfmon/gpperfmon42.sql gpperfmon >& /dev/null
20170427:16:17:52:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-PGPORT=5432 psql -f /usr/local/greenplum-db/lib/gpperfmon/gpperfmonC.sql template1 >& /dev/null
20170427:16:17:52:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-PGPORT=5432 psql template1 -c "DROP ROLE IF EXISTS gpmon" >& /dev/null
20170427:16:17:52:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-PGPORT=5432 psql template1 -c "CREATE ROLE gpmon WITH SUPERUSER CREATEDB LOGIN ENCRYPTED PASSWORD 'gpmon_ckys0718'" >& /dev/null
20170427:16:17:52:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-echo "local gpperfmon gpmon md5" >> /data/master/gpseg-1/pg_hba.conf
20170427:16:17:52:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-echo "host all gpmon 127.0.0.1/28 md5" >> /data/master/gpseg-1/pg_hba.conf
20170427:16:17:52:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-touch /home/gpadmin/.pgpass >& /dev/null
20170427:16:17:52:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-mv -f /home/gpadmin/.pgpass /home/gpadmin/.pgpass.1493281059 >& /dev/null
20170427:16:17:52:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-echo "*:5432:gpperfmon:gpmon:gpmon_ckys0718" >> /home/gpadmin/.pgpass
20170427:16:17:52:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-cat /home/gpadmin/.pgpass.1493281059 >> /home/gpadmin/.pgpass
20170427:16:17:52:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-chmod 0600 /home/gpadmin/.pgpass >& /dev/null
20170427:16:17:52:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gp_enable_gpperfmon -v on >& /dev/null
20170427:16:17:58:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gpperfmon_port -v 8888 >& /dev/null
20170427:16:18:03:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gp_external_enable_exec -v on --masteronly >& /dev/null
20170427:16:18:09:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gpperfmon_log_alert_level -v warning >& /dev/null
20170427:16:18:14:017395 gpperfmon_install:dwhm01_2_111:gpadmin-[INFO]:-gpperfmon will be enabled after a full restart of GPDB
[gpadmin@dwhm01_2_111 greenplum_file]$
(2) Restart GPDB:
Command:
gpstop -r |
Restart output:
[gpadmin@dwhm01_2_111 greenplum_file]$ gpstop -r
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Starting gpstop with args: -r
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Gathering information and validating the environment...
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Obtaining Segment details from master...
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.12.0 build 1'
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:---------------------------------------------
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Master instance parameters
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:---------------------------------------------
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- Master Greenplum instance process active PID = 13079
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- Database = template1
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- Master port = 5432
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- Master directory = /data/master/gpseg-1
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- Shutdown mode = smart
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- Timeout = 120
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- Shutdown Master standby host = On
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:---------------------------------------------
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Segment instances that will be shutdown:
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:---------------------------------------------
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- Host Datadir Port Status
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- dwhs02_2_113 /data/data1/primary/gpseg0 40000 u
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- dwhs02_2_113 /data/data1/primary/gpseg1 40001 u
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- dwhs03_2_114 /data/data1/primary/gpseg2 40000 u
20170427:16:20:05:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- dwhs03_2_114 /data/data1/primary/gpseg3 40001 u

Continue with Greenplum instance shutdown Yy|Nn (default=N): > y
20170427:16:20:09:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-There are 0 connections to the database
20170427:16:20:09:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20170427:16:20:09:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Master host=dwhm01_2_111
20170427:16:20:09:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20170427:16:20:09:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Master segment instance directory=/data/master/gpseg-1
20170427:16:20:11:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20170427:16:20:11:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Terminating processes for segment /data/master/gpseg-1
20170427:16:20:11:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Stopping master standby host smdw1 mode=fast
20170427:16:20:12:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Successfully shutdown standby process on smdw1
20170427:16:20:12:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Commencing parallel segment instance shutdown, please wait...
20170427:16:20:12:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-0.00% of jobs completed
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-100.00% of jobs completed
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-----------------------------------------------------
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- Segments stopped successfully = 4
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:- Segments with errors during stop = 0
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-----------------------------------------------------
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Successfully shutdown 4 of 4 segment instances
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Cleaning up leftover gpmmon process
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-No leftover gpmmon process found
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Cleaning up leftover gpsmon processes
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-No leftover gpsmon processes on some hosts. not attempting forceful termination on these hosts
20170427:16:20:22:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Cleaning up leftover shared memory
20170427:16:20:27:018126 gpstop:dwhm01_2_111:gpadmin-[INFO]:-Restarting System...
[gpadmin@dwhm01_2_111 greenplum_file]$ |
(3) Check that the gpmmon process is running
[gpadmin@dwhm01_2_111 greenplum_file]$ ps -ef | grep gpmmon
gpadmin  18306 18297  0 16:20 ?     00:00:00 /data/greenplum-db-4.3.12.0/bin/gpmmon -D /data/master/gpseg-1/gpperfmon/conf/gpperfmon.conf -p 5432
gpadmin  18624 14788  0 16:21 pts/0 00:00:00 grep gpmmon
[gpadmin@dwhm01_2_111 greenplum_file]$ |
(4) Verify that monitoring data is being written
Query: psql -d gpperfmon -c 'select * from system_now;'
[gpadmin@dwhm01_2_111 greenplum_file]$ psql -d gpperfmon -c 'select * from system_now;'
ctime | hostname | mem_total | mem_used | mem_actual_used | mem_actual_free | swap_total | swap_used | swap_page_in | swap_page_out | cpu_user | cpu_sys | cpu_idle | load0 | load1 | load2 | quantum | disk_ro_rate | disk_wo_rate | disk_rb_rate | disk_wb_rate | net_rp_rate | net_wp_rate | net_rb_rate | net_wb_rate
---------------------+--------------+-------------+------------+-----------------+-----------------+------------+-----------+--------------+---------------+----------+---------+----------+-------+-------+-------+---------+--------------+--------------+--------------+--------------+-------------+-------------+-------------+-------------
2017-04-27 16:23:00 | dwhm01_2_111 | 29561516032 | 2471997440 | 447844352 | 29113671680 | 0 | 0 | 0 | 0 | 0.03 | 0.03 | 99.92 | 0.01 | 0.02 | 0 | 15 | 0 | 1 | 0 | 10060 | 10 | 10 | 3410 | 1903
2017-04-27 16:23:00 | dwhs01_2_112 | 7321321472 | 1101586432 | 198516736 | 7122804736 | 0 | 0 | 0 | 0 | 0 | 0.07 | 99.93 | 0 | 0 | 0 | 15 | 0 | 1 | 0 | 816 | 1 | 1 | 39 | 129
2017-04-27 16:23:00 | dwhs02_2_113 | 7321321472 | 1391218688 | 200634368 | 7120687104 | 0 | 0 | 0 | 0 | 0.1 | 0 | 99.9 | 0 | 0.01 | 0 | 15 | 0 | 0 | 0 | 0 | 5 | 6 | 2225 | 1597
2017-04-27 16:23:00 | dwhs03_2_114 | 7321321472 | 1393754112 | 202346496 | 7118974976 | 0 | 0 | 0 | 0 | 0.07 | 0.03 | 99.9 | 0 | 0.03 | 0 | 15 | 0 | 1 | 0 | 272 | 5 | 6 | 2222 | 1594
(4 rows)
[gpadmin@dwhm01_2_111 greenplum_file]$ |
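system_now is only one of the gpperfmon tables; each *_now table has a matching *_history table. A couple of illustrative queries, under the assumption that the standard gpperfmon schema for this GP version is in place (column sets can vary slightly between releases):

# recent per-host CPU samples from the history table
psql -d gpperfmon -c "select ctime, hostname, cpu_user, cpu_sys, cpu_idle from system_history order by ctime desc limit 8;"
# statements currently seen by the collector
psql -d gpperfmon -c "select * from queries_now;"
|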
(5) Copy the configuration files from the master host to the corresponding directories on the standby master (smdw1)
su - gpadmin
gpscp -h smdw1 $MASTER_DATA_DIRECTORY/pg_hba.conf =:$MASTER_DATA_DIRECTORY/
gpscp -h smdw1 ~/.pgpass =:~/
|
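To confirm the files actually landed on the standby, a quick check over gpssh (this assumes passwordless SSH for gpadmin is already set up, as it must be for the cluster, and uses the pg_hba.conf path shown earlier in this document):

gpssh -h smdw1 -e "tail -n 3 /data/master/gpseg-1/pg_hba.conf"
gpssh -h smdw1 -e "ls -l /home/gpadmin/.pgpass"
|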
13.2 Installing greenplum-cc-web
The tool was previously named greenplum-pmon-web; that version is long obsolete and essentially unused, and the product was renamed to greenplum-cc-web (one downside of the rename is that anyone who only knows the old name will struggle to find a download). The current package is greenplum-cc-web-3.2.0-LINUX-x86_64.zip, available at:
https://network.pivotal.io/products/pivotal-gpdb#/releases/4540/file_groups/26 |
Install greenplum-cc-web on the master (as the root user):
unzip greenplum-cc-web-3.2.0-LINUX-x86_64.zip
/bin/bash greenplum-cc-web-3.2.0-LINUX-x86_64.bin -y   # answer yes to every console prompt during installation |
Add the PATH setup to ~/.bashrc
vim ~/.bashrc
# append this line:
source /usr/local/greenplum-cc-web/gpcc_path.sh |
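The vim step above means appending the source line to ~/.bashrc. An equivalent non-interactive edit:

echo 'source /usr/local/greenplum-cc-web/gpcc_path.sh' >> ~/.bashrc
source ~/.bashrc
|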
Grant ownership to the gpadmin user:
chown -R gpadmin:gpadmin /usr/local/greenplum-cc-web-3.2.0
chown -R gpadmin:gpadmin /usr/local/greenplum-cc-web |
Install on the other (non-master) hosts with gpccinstall -f hostfile_nm. The procedure:
(1) First prepare a list of the non-master hosts
[root@dwhm01_2_111 greenplum_file]# more hostfile_nm
smdw1
sdw1
sdw2
[root@dwhm01_2_111 greenplum_file]#
(2) Run the installation with gpccinstall
[root@dwhm01_2_111 greenplum_file]# gpccinstall -f hostfile_nm
20170428:13:33:22:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-Installation Info:
link_name greenplum-cc-web
binary_path /usr/local/greenplum-cc-web-3.2.0
binary_dir_location /usr/local
binary_dir_name greenplum-cc-web-3.2.0
Stopping running gpcc instance ... Done.
rm -f /usr/local/greenplum-cc-web-3.2.0.tar; rm -f /usr/local/greenplum-cc-web-3.2.0.tar.gz
cd /usr/local; tar cf greenplum-cc-web-3.2.0.tar greenplum-cc-web-3.2.0
gzip /usr/local/greenplum-cc-web-3.2.0.tar
20170428:13:33:27:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (smdw1): mkdir -p /usr/local
20170428:13:33:27:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw1): mkdir -p /usr/local
20170428:13:33:27:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw2): mkdir -p /usr/local
20170428:13:33:27:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (smdw1): rm -rf /usr/local/greenplum-cc-web-3.2.0
20170428:13:33:27:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw1): rm -rf /usr/local/greenplum-cc-web-3.2.0
20170428:13:33:27:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw2): rm -rf /usr/local/greenplum-cc-web-3.2.0
20170428:13:33:27:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-scp software to remote location
20170428:13:33:30:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (smdw1): gzip -f -d /usr/local/greenplum-cc-web-3.2.0.tar.gz
20170428:13:33:30:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw1): gzip -f -d /usr/local/greenplum-cc-web-3.2.0.tar.gz
20170428:13:33:30:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw2): gzip -f -d /usr/local/greenplum-cc-web-3.2.0.tar.gz
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (smdw1): cd /usr/local; tar xf greenplum-cc-web-3.2.0.tar
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw1): cd /usr/local; tar xf greenplum-cc-web-3.2.0.tar
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw2): cd /usr/local; tar xf greenplum-cc-web-3.2.0.tar
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (smdw1): rm -f /usr/local/greenplum-cc-web-3.2.0.tar
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw1): rm -f /usr/local/greenplum-cc-web-3.2.0.tar
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw2): rm -f /usr/local/greenplum-cc-web-3.2.0.tar
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (smdw1): cd /usr/local; rm -f greenplum-cc-web; ln -fs greenplum-cc-web-3.2.0 greenplum-cc-web
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw1): cd /usr/local; rm -f greenplum-cc-web; ln -fs greenplum-cc-web-3.2.0 greenplum-cc-web
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-runPoolCommand on (sdw2): cd /usr/local; rm -f greenplum-cc-web; ln -fs greenplum-cc-web-3.2.0 greenplum-cc-web
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-Verifying installed software versions
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-remote command: . /usr/local/greenplum-cc-web/./gpcc_path.sh; /usr/local/greenplum-cc-web/./bin/gpcmdr --version
20170428:13:33:31:024961 gpccinstall:dwhm01_2_111:root-[INFO]:-remote command: . /usr/local/greenplum-cc-web-3.2.0/gpcc_path.sh; /usr/local/greenplum-cc-web-3.2.0/bin/gpcmdr --version
********************************************************************************
Greenplum Command Center is installed on
- smdw1
- sdw1
- sdw2
********************************************************************************
[root@dwhm01_2_111 greenplum_file]# |
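A quick way to double-check the install on every non-master host, reusing the same host list (a sketch; the symlink and binary directory are exactly what gpccinstall laid down above):

gpssh -f hostfile_nm -e "ls -ld /usr/local/greenplum-cc-web"
gpssh -f hostfile_nm -e "ls /usr/local/greenplum-cc-web/bin/gpcmdr"
|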
13.3 Setting up the Command Center Console
(1) Stop the Greenplum database
su - gpadmin
gpstop

(2) Configure the paths
vim ~/.bashrc
# append these lines:
source /usr/local/greenplum-db/greenplum_path.sh
source /usr/local/greenplum-cc-web/gpcc_path.sh
# then run `source ~/.bashrc` to apply

(3) Start the Greenplum database
gpstart

(4) Once the GPDB instance is up, run gpcmdr --setup to configure the Command Center Console. The setup dialog:
[gpadmin@dwhm01_2_111 ~]$ gpcmdr --setup
The instance name identifies the GPDB cluster this Greenplum Command Center web UI monitors and controls. Instance names can contain letters, digits, and underscores and are not case sensitive.
Please enter the instance name gm^H^H^H^H
ERROR: Instance name' has an illegal character
Please enter the instance name gpmon_ys
The display name is shown as the "server" in the web interface and does not need to be a hostname.
Display names can contain letters, digits, and underscores and ARE case sensitive.
Please enter the display name for this instance:(Press ENTER to use instance name) yschina_db
A GPCC instance can be set to manage and monitor a remote Greenplum Database.
Notice: remote mode will disable these functionalities:
1. Standby host for GPCC.
2. Workload Manager UI.
Is the master host for the Greenplum Database remote? Yy/Nn (default=N) N
What port does the Greenplum Database use? (default=5432)
Enable kerberos login for this instance? Yy/Nn (default=N) gpcc^H^H^H
Enable kerberos login for this instance? Yy/Nn (default=N) Y
Requirements for using Kerberos with GPCC:
1. RedHat Linux 5.10 or 6+ (Centos 5 and SLES are not supported)
2. /etc/krb5.conf file is the same as on the Kerberos server
3. Greenplum database must already be configured for Kerberos
Confirm webserver name, IP, or DNS from keytab file.
For example, if the HTTP principal in your keytab file is HTTP/gpcc.example.com@KRB.EXAMPLE.COM, enter "gpcc.example.com".
Enter webserver name for this instance: (default=dwhm01_2_111)
Enter the name of GPDB kerberos service name: (default=postgres) yschina
GPCC supports 3 different kerberos mode:
1. Normal mode: If keytab file provided contains the login user's key entry, GPCC will run queries as the login user. Otherwise, GPCC will run all queries as gpmon user.
2. Strict mode: If keytab file doesn't contain the login user's key entry, the user won't be able to login.
3. Gpmon Only mode: The keytab file can only contain service keys, no user's key entry is needed in keytab file. Only gpmon ticket need to be obtained in GPCC server machine before GPCC runs, and refresh before expiration.
Choose kerberos mode (1.normal/2.strict/3.gpmon_only): (default=1)
Enter path to the keytab file: /usr/local/greenplumn-cc-web/
ERROR: File /usr/local/greenplumn-cc-web/ does not exist or is not a correct file.

Enter path to the keytab file: /usr/local/greenplumn-cc-web/keytab
ERROR: File /usr/local/greenplumn-cc-web/keytab does not exist or is not a correct file.

Enter path to the keytab file: /usr/local/greenplum-cc-web/keytab
ERROR: File /usr/local/greenplum-cc-web/keytab does not exist or is not a correct file.
Enter path to the keytab file: /usr/local/greenplum-cc-web/keytab
Creating instance schema in GPDB. Please wait ...
pq: no pg_hba.conf entry for host "::1", user "gpmon", database "gpperfmon", SSL off
Remove instance gpmon_ys ...
[gpadmin@dwhm01_2_111 ~]$
|
The setup failed with an error: the tool connected over the IPv6 loopback (::1), which none of the existing pg_hba.conf entries for gpmon (local socket and 127.0.0.1) covers. Fix it by adding an entry to pg_hba.conf, as sketched below, then follow steps (1)-(3).
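After the fix in step (1), the relevant tail of pg_hba.conf should look roughly like this (a sketch assembled from the entries shown earlier in this document):

# tail of /data/master/gpseg-1/pg_hba.conf
local   gpperfmon   gpmon                  md5
host    all         gpmon    127.0.0.1/28  md5
host    gpperfmon   gpmon    ::1/128       md5
|

For pg_hba.conf changes alone, a configuration reload with `gpstop -u` is normally sufficient; the full restart in step (2) below works as well.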
(1) Add one line of configuration
[gpadmin@dwhm01_2_111 ~]$ vim /data/master/gpseg-1/pg_hba.conf
host gpperfmon gpmon ::1/128 md5
(2) Restart GPDB
(3) Run the setup again
[gpadmin@dwhm01_2_111 ~]$ gpcmdr --setup
The instance name identifies the GPDB cluster this Greenplum Command Center web UI monitors and controls. Instance names can contain letters, digits, and underscores and are not case sensitive.
Please enter the instance name yschina_db
ERROR: Instance 'yschina_db' already exists

Please enter the instance name yschina_db
ERROR: Instance 'yschina_db' already exists

Please enter the instance name yschina_db^H^H
ERROR: Instance name 'yschina_' has an illegal character
Please enter the instance name ysdb
The display name is shown as the "server" in the web interface and does not need to be a hostname.
Display names can contain letters, digits, and underscores and ARE case sensitive.
Please enter the display name for this instance:(Press ENTER to use instance name) ysdb
A GPCC instance can be set to manage and monitor a remote Greenplum Database.
Notice: remote mode will disable these functionalities:
1. Standby host for GPCC.
2. Workload Manager UI.
Is the master host for the Greenplum Database remote? Yy/Nn (default=N) n
What port does the Greenplum Database use? (default=5432)
Enable kerberos login for this instance? Yy/Nn (default=N) n
Creating instance schema in GPDB. Please wait ...
The Greenplum Command Center runs a small web server for the UI and web API. This web server by default runs on port 28080, but you may specify any available port.
What port would you like the new web server to use for this instance? (default=28080)
Users logging in to the Command Center must provide database user credentials. In order to protect user names and passwords, it is recommended that SSL be enabled.
Enable SSL for the Web API Yy/Nn (default=N) n
Copy the instance to a standby master host Yy/Nn (default=Y) y
What is the hostname of the standby master host? smdw1
standby is smdw1
Done writing webserver configuration to /usr/local/greenplum-cc-web/instances/ysdb/webserver/conf/app.conf
Copying instance ysdb to host smdw1 ...
=> Cleanup standby host's instance ysdb if any ...
=> Copying the instance folder to standby host ...
exit status 2
Remove instance ysdb ...
[gpadmin@dwhm01_2_111 ~]$
(4) Start the instance
[gpadmin@dwhm01_2_111 ~]$ gpcmdr --start
Starting instance yschina_db ...
Greenplum Command Center UI for instance 'yschina_db' - [RUNNING on PORT: 28080, pid 54242]
[gpadmin@dwhm01_2_111 ~]$
(5) Check the port status
[gpadmin@dwhm01_2_111 ~]$ lsof -i :28080
COMMAND   PID    USER  FD  TYPE  DEVICE  SIZE/OFF  NODE  NAME
gpmonws  54242 gpadmin  4u  IPv4 1569630      0t0   TCP  *:28080 (LISTEN)
[gpadmin@dwhm01_2_111 ~]$ |
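Besides lsof, you can confirm the web server answers HTTP from the shell (assumes curl is installed; a 200 is expected):

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.13.111:28080/
|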
13.4 Accessing the console
The console URL is http://192.168.13.111:28080/. It opened fine in Google Chrome, while QQ Browser and 360 Browser failed to load it (cause unknown). The login page looks like this:
[Screenshot: Command Center login page]
Log in with the default user gpmon and the password specified during installation, gpmon_ckys0718. After logging in, the console looks like this:
[Screenshot: Command Center dashboard after login]
14 Working with the Database
14.1 Viewing the built-in help
By default psql connects to the postgres database; once logged in you can browse the help:
[gpadmin@dwhm01_2_111 greenplum_file]$ psql -d postgres
psql (8.2.15)
Type "help" for help.

postgres=# help
You are using psql, the command-line interface to PostgreSQL.
Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit
postgres=# \h
Available help:
ABORT  ALTER TYPE  CREATE INDEX  DROP CAST  DROP TRIGGER  REVOKE
ALTER AGGREGATE  ALTER USER  CREATE LANGUAGE  DROP CONVERSION  DROP TYPE  ROLLBACK
ALTER CONVERSION  ALTER USER MAPPING  CREATE OPERATOR  DROP DATABASE  DROP USER  ROLLBACK PREPARED
ALTER DATABASE  ANALYZE  CREATE OPERATOR CLASS  DROP DOMAIN  DROP USER MAPPING  ROLLBACK TO SAVEPOINT
ALTER DOMAIN  BEGIN  CREATE RESOURCE QUEUE  DROP EXTERNAL TABLE  DROP VIEW  SAVEPOINT
ALTER EXTERNAL TABLE  CHECKPOINT  CREATE ROLE  DROP FILESPACE  END  SELECT
ALTER FILESPACE  CLOSE  CREATE RULE  DROP FOREIGN DATA WRAPPER  EXECUTE  SELECT INTO
ALTER FOREIGN DATA WRAPPER  CLUSTER  CREATE SCHEMA  DROP FUNCTION  EXPLAIN  SET
ALTER FUNCTION  COMMENT  CREATE SEQUENCE  DROP GROUP  FETCH  SET CONSTRAINTS
ALTER GROUP  COMMIT  CREATE SERVER  DROP INDEX  GRANT  SET ROLE
ALTER INDEX  COMMIT PREPARED  CREATE TABLE  DROP LANGUAGE  INSERT  SET SESSION AUTHORIZATION
ALTER LANGUAGE  COPY  CREATE TABLE AS  DROP OPERATOR  LISTEN  SET TRANSACTION
ALTER OPERATOR  CREATE AGGREGATE  CREATE TABLESPACE  DROP OPERATOR CLASS  LOAD  SHOW
ALTER OPERATOR CLASS  CREATE CAST  CREATE TRIGGER  DROP OWNED  LOCK  START TRANSACTION
ALTER RESOURCE QUEUE  CREATE CONSTRAINT TRIGGER  CREATE TYPE  DROP RESOURCE QUEUE  MOVE  TRUNCATE
ALTER ROLE  CREATE CONVERSION  CREATE USER  DROP ROLE  NOTIFY  UNLISTEN
ALTER SCHEMA  CREATE DATABASE  CREATE USER MAPPING  DROP RULE  PREPARE  UPDATE
ALTER SEQUENCE  CREATE DOMAIN  CREATE VIEW  DROP SCHEMA  PREPARE TRANSACTION  VACUUM
ALTER SERVER  CREATE EXTERNAL TABLE  DEALLOCATE  DROP SEQUENCE  REASSIGN OWNED  VALUES
ALTER TABLE  CREATE FOREIGN DATA WRAPPER  DECLARE  DROP SERVER  REINDEX
ALTER TABLESPACE  CREATE FUNCTION  DELETE  DROP TABLE  RELEASE SAVEPOINT
ALTER TRIGGER  CREATE GROUP  DROP AGGREGATE  DROP TABLESPACE  RESET
postgres=# |
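Alongside \h, a few psql meta-commands worth knowing (standard psql, so this is generic rather than Greenplum-specific):

\l          list databases
\dt         list tables in the current database
\dn         list schemas
\d zz01     describe a table (here the zz01 table created below)
\timing     toggle display of query execution time
\q          quit
|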
14.2 Creating a database and a table
(1) Check the server version
postgres=# select version();
                                                                version
-------------------------------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 8.2.15 (Greenplum Database 4.3.12.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Feb 27 2017 20:45:12
(1 row)
(2) Create the database yschina_db
postgres=# create database yschina_db;
CREATE DATABASE
postgres=# \q
[gpadmin@dwhm01_2_111 greenplum_file]$ psql -d yschina_db
psql (8.2.15)
Type "help" for help.
(3) Create the table zz01 and insert a row
yschina_db=# create table zz01(id int primary key, col1 varchar(50));
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "zz01_pkey" for table "zz01"
CREATE TABLE
yschina_db=# insert into zz01 select 1,'yschina';
INSERT 0 1
yschina_db=# select * from zz01;
 id |  col1
----+---------
  1 | yschina
(1 row)

yschina_db=# |
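In Greenplum, the primary key also becomes the table's distribution key by default, which determines the segment each row hashes to. A minimal sketch for inspecting this, assuming the standard GP 4.3 catalog (gp_segment_id pseudo-column and the gp_distribution_policy table):

-- which segment holds each row
yschina_db=# select gp_segment_id, id, col1 from zz01;
-- the distribution policy; attrnums lists the distribution-key column numbers
yschina_db=# select localoid::regclass, attrnums from gp_distribution_policy where localoid = 'zz01'::regclass;
|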
14.3 User management
Create a user:
yschina_db=# create role timdba login password 'tim_0923'; |
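A minimal follow-up sketch: grant the new role access to the database and verify it can log in. Hedged assumption: the TCP login only works if pg_hba.conf has a matching host entry for timdba, analogous to the gpmon entries added earlier.

yschina_db=# grant all on database yschina_db to timdba;
yschina_db=# \q
[gpadmin@dwhm01_2_111 ~]$ psql -d yschina_db -U timdba -h 127.0.0.1 -c "select current_user;"
|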
References:
http://gpdb.docs.pivotal.io/43120/install_guide/prep_os_install_gpdb.html#topic2
http://gpdb.docs.pivotal.io/43120/install_guide/prep_os_install_gpdb.html#topic8
http://www.cnblogs.com/dap570/p/greenplum_4node_install.html