A Detailed Guide to Deploying a Greenplum Cluster

A step-by-step guide to deploying a greenplum-db-4.3.16.1 cluster. Follow the steps exactly and it will work; if it doesn't, you probably skipped a step.

1. Preparation

Required on every machine:

yum install -y sed
yum install -y tar
yum install -y perl

2. Configure System Settings

2.1 Configure system settings in preparation for installing Greenplum

Greenplum Database version: 4.3.16.1

Linux version: CentOS 7.9 64-bit

2.1.1 Cluster overview

The cluster here uses one master, one standby master, and two segment hosts, with these IPs:
192.168.1.171  mdw
192.168.1.172  smdw
192.168.1.173  sdw1
192.168.1.174  sdw2
192.168.1.171 is the master, 192.168.1.172 is the standby master, and the rest are segment hosts.

2.1.2 Edit the /etc/hosts file (on every machine)

This prepares the Greenplum nodes to communicate with each other. Add entries like the following,
in the format: host IP address, then hostname.

[root@mdw ~]# cat /etc/hosts
192.168.1.171  mdw
192.168.1.172  smdw
192.168.1.173  sdw1
192.168.1.174  sdw2

Note: follow this format exactly; the meaning of each field is explained above.
After configuring this file, also set the hostname by editing /etc/sysconfig/network,
as follows (on every machine):

[root@mdw ~]# vi /etc/sysconfig/network
# Created by anaconda

NETWORKING=yes
HOSTNAME=mdw

Note: the hostname change here only takes effect after a reboot; to apply it immediately, use the hostname command.
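On CentOS 7 the immediate change can also be made with hostnamectl (a sketch; run on each host with that host's own name):

[root@mdw ~]# hostnamectl set-hostname mdw
[root@mdw ~]# hostname    # verify: should print mdw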
The HOSTNAME here must match the hostname in /etc/hosts. Finally, use ping to test that everything is configured:

If a ping fails, stop and fix it before moving on to the next step.

[root@mdw ~]# ping smdw
[root@mdw ~]# ping sdw1
[root@mdw ~]# ping sdw2

Be sure to test with the hostnames; pinging the IP addresses does not validate this configuration.

2.1.3 Create the user and group (on every machine)

Create the gpadmin user and the gpadmin group:

groupadd gpadmin
useradd gpadmin -r -m -g gpadmin
passwd gpadmin
# enter a password of your choice

Grant gpadmin sudo access: run visudo and make sure the following line is uncommented:

%wheel        ALL=(ALL)       NOPASSWD: ALL

Then add gpadmin to the wheel group:

usermod -aG wheel gpadmin
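To verify the sudo grant, switch to gpadmin and run a command through sudo; with the NOPASSWD rule above there should be no password prompt:

su - gpadmin
sudo whoami    # expect: root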

2.1.4 Tune kernel parameters (on every machine)
[root@mdw ~]# vi /etc/sysctl.conf
kernel.shmall = _PHYS_PAGES / 2 ### adjust for your machine; see the sketch below
kernel.shmmax = kernel.shmall * PAGE_SIZE ### adjust for your machine; see the sketch below
kernel.shmmni = 4096
vm.overcommit_memory = 2
vm.overcommit_ratio = 95

net.ipv4.ip_local_port_range = 10000 65535
kernel.sem = 500 2048000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.swappiness = 10
vm.zone_reclaim_mode = 0
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100

### For machines with more than 64 GB of RAM, add the following four settings
#vm.dirty_background_ratio = 0
#vm.dirty_ratio = 0
#vm.dirty_background_bytes = 1610612736
#vm.dirty_bytes = 4294967296
####################################
#### For machines with 64 GB of RAM or less, add the following two settings
vm.dirty_background_ratio = 3
vm.dirty_ratio = 10
####################################
vm.min_free_kbytes = 549877

Run the following command to apply the parameters:

[root@mdw ~]# sysctl -p

Note: every machine must be updated, otherwise initialization will fail later.
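The two kernel.shm* values above are formulas rather than literal settings; a sketch for computing concrete numbers for a given host with getconf, to be pasted into /etc/sysctl.conf:

echo "kernel.shmall = $(( $(getconf _PHYS_PAGES) / 2 ))"
echo "kernel.shmmax = $(( $(getconf _PHYS_PAGES) / 2 * $(getconf PAGE_SIZE) ))"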

2.1.5 Raise the open-file limits (on every machine):

At the end of this file, remove any existing nofile/nproc entries and append the following four lines:

[root@mdw ~]# vi /etc/security/limits.conf
* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072
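After logging out and back in as gpadmin, the new limits can be verified:

[gpadmin@mdw ~]$ ulimit -n    # expect 524288
[gpadmin@mdw ~]$ ulimit -u    # expect 131072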
2.1.6 Disable the firewall (on every machine)
service iptables stop
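On CentOS 7.9 the default firewall is firewalld rather than the iptables service, so the command above may report that the service does not exist; assuming firewalld is in use, stop and disable it instead:

systemctl stop firewalld
systemctl disable firewalld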

Disable SELinux:

sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
sudo sed -i 's/^SELINUX=permissive/SELINUX=disabled/' /etc/selinux/config

After running the commands above, check with sestatus:
[root@mdw ~]# sestatus
SELinux status:                 disabled

If it reports disabled, SELinux is permanently off.
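The sed edits above only take effect at the next boot; to stop enforcement immediately without rebooting (sestatus will then report permissive until the reboot):

sudo setenforce 0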

2.1.7 Run the following two commands to set the disk I/O scheduler to deadline and disable transparent huge pages (THP), both of which improve performance
grubby --update-kernel=ALL --args="elevator=deadline"
grubby --update-kernel=ALL --args="transparent_hugepage=never"

When done, reboot the system for these settings to take effect.
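After the reboot, you can confirm THP is off; the active setting is the one in brackets:

cat /sys/kernel/mm/transparent_hugepage/enabled
# expected: always madvise [never]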

3. Install Greenplum

3.1 Download the installer

Official site: https://network.pivotal.io/products/pivotal-gpdb
Note: you need to register an account on the site before you can download the package.

3.2 Install Greenplum on the master (master only)
[root@mdw ~]# cd  /usr/local/
[root@mdw local]# ll greenplum-db-4.3.16.1.zip
-rw-r--r--. 1 gpadmin gpadmin 140205947 Aug 19 10:41 greenplum-db-4.3.16.1.zip

Unzip greenplum-db-4.3.16.1.zip:

[root@mdw local]# unzip greenplum-db-4.3.16.1.zip
[root@mdw local]# ll greenplum-db-4.3.16.1-rhel5-x86_64.bin
-rwxr-xr-x. 1 gpadmin gpadmin 142515526 Aug 15 2017 greenplum-db-4.3.16.1-rhel5-x86_64.bin

Unzipping produces the file greenplum-db-4.3.16.1-rhel5-x86_64.bin.

As root, put the file anywhere convenient on the system, give it execute permission, and run it to start the installation:

[root@mdw local]# chmod +x greenplum-db-4.3.16.1-rhel5-x86_64.bin
[root@mdw local]# ./greenplum-db-4.3.16.1-rhel5-x86_64.bin

During installation, change the default install directory when prompted: enter /usr/local (at the second prompt, after answering yes) and the install completes. Greenplum is now installed on the master.
The message shown on completion:

********************************************************************************
Installation complete.
Greenplum Database is installed in /usr/local/greenplum-db-4.3.16.1

Pivotal Greenplum documentation is available
for download at http://gpdb.docs.pivotal.io
********************************************************************************

Everything so far was done as root, so change the owner of the installation directory to gpadmin:

chown -R gpadmin:gpadmin /usr/local/greenplum-db-4.3.16.1

So far Greenplum is installed only on the master; the installation now has to be pushed to every other node for the cluster install to be complete.
The steps below connect all the nodes and distribute the package to each of them.

3.3 Create the host-list files
[root@mdw greenplum]# su - gpadmin
[gpadmin@mdw ~]$ ll
total 0
[gpadmin@mdw ~]$ mkdir conf
[gpadmin@mdw ~]$ ll
total 4
drwxrwxr-x 2 gpadmin gpadmin 4096 Mar 13 22:57 conf
[gpadmin@mdw ~]$ cd conf
[gpadmin@mdw conf]$ vi hostlist
mdw
smdw
sdw1
sdw2

[gpadmin@mdw conf]$ vi standby_seg_hosts
smdw
sdw1
sdw2

[gpadmin@mdw conf]$ vi seg_hosts
sdw1
sdw2

[gpadmin@mdw conf]$ ll
total 8
-rw-rw-r-- 1 gpadmin gpadmin 30 Mar 13 22:58 hostlist
-rw-rw-r-- 1 gpadmin gpadmin 30 Mar 13 23:05 seg_hosts
-rw-rw-r-- 1 gpadmin gpadmin 30 Mar 13 23:05 standby_seg_hosts

Note: from here on, work as the gpadmin user. Create the hostlist, standby_seg_hosts, and seg_hosts files with the contents shown above, for later use.
hostlist holds the hostnames of all nodes; standby_seg_hosts holds every node except the master; seg_hosts holds the segment-only hosts.
The names mdw, smdw, sdw1, and sdw2 in these files are the hostnames configured earlier as the last field of /etc/hosts.

3.4 Establish passwordless SSH between all nodes

greenplum_path.sh holds the environment settings Greenplum needs at runtime, including GPHOME, PYTHONHOME, and so on.

If sourcing it fails with an error that the greenplum-db directory does not exist, create the symlink:

ln -s /usr/local/greenplum-db-4.3.16.1 /usr/local/greenplum-db
[gpadmin@mdw ~]$ source /usr/local/greenplum-db/greenplum_path.sh
[gpadmin@mdw ~]$ gpssh-exkeys -f /home/gpadmin/conf/hostlist
[STEP 1 of 5] create local ID and authorize on local host
  ... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts
  ... send to smdw
  ***
  *** Enter password for smdw: 
  ... send to sdw1
  ... send to sdw2

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with smdw
  ... finished key exchange with sdw1
  ... finished key exchange with sdw2

[INFO] completed successfully

Note:
(1) The first time you run gpssh-exkeys, at [STEP 3 of 5] you must enter the gpadmin password for each remote host.
(2) Always run gpssh-exkeys as gpadmin: it generates the passwordless-SSH keys under /home/gpadmin/.ssh. Run as root, the keys end up under root's home directory (or under /home/gpadmin but owned by root), and later operations performed as gpadmin will fail with permission errors.

(3) If gpssh-exkeys fails, set up passwordless login manually; a minimal sketch follows.
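Run as gpadmin on the master (a sketch, assuming ssh-keygen and ssh-copy-id are available):

ssh-keygen -t rsa            # accept the defaults, empty passphrase
ssh-copy-id gpadmin@smdw     # enter the gpadmin password once per host
ssh-copy-id gpadmin@sdw1
ssh-copy-id gpadmin@sdw2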
Once it reports [INFO] completed successfully, the nodes are connected and you can start batch operations, like so:

[gpadmin@mdw ~]$ gpssh -f /home/gpadmin/conf/hostlist
=> pwd
[mdw] /home/gpadmin
[smdw] /home/gpadmin
[sdw1] /home/gpadmin
[sdw2] /home/gpadmin
=> exit

pwd is the ordinary Linux print-working-directory command; here it confirms where the batch session is operating and shows all four nodes connected at once. (If /etc/hosts were missing hostname entries, only the hosts actually listed could be reached.)
This was just a test; exit, and do a few other things first.

3.5 Distribute the installation to every child node

With the nodes connected, batch-copy the Greenplum installation from the master to each of the other hosts.
Create the archive:

[gpadmin@mdw ~]$ cd /usr/local
[gpadmin@mdw local]$ tar -czf greenplum-db.tar.gz ./greenplum-db-4.3.16.1
[gpadmin@mdw local]$ ll
total 1238124
-rw-r--r--.  1 gpadmin gpadmin 140205947 8月  19 10:41 greenplum-db-4.3.16.1.zip

Then use gpscp to copy the archive to every other host (standby_seg_hosts covers the standby master and both segment hosts, all of which need the software):

[gpadmin@mdw local]$ gpscp -f /home/gpadmin/conf/standby_seg_hosts greenplum-db.tar.gz =:/usr/local

Barring surprises, the copy succeeds; you can check the target directory on each node. Next, untar the archives using a batch session:

[gpadmin@mdw local]$ cd ~/conf/
[gpadmin@mdw conf]$ gpssh -f standby_seg_hosts
=> cd /usr/local
[smdw]
[sdw1]
[sdw2]
=> tar -xf greenplum-db.tar.gz
[smdw]
[sdw1]
[sdw2]
# create the symlink
=> ln -s greenplum-db-4.3.16.1 greenplum-db
[smdw]
[sdw1]
[sdw2]
# (use ll to confirm the installation is in place)
=> exit

Greenplum is now installed on every node.

4. Initialize the Database

The few steps before initialization are all preparation.

4.1 Batch-create the Greenplum data directories
[gpadmin@mdw conf]$ gpssh -f hostlist
=> cd /opt
=> mkdir gpdata
[mdw]
[smdw]
[sdw1]
[sdw2]
=> cd gpdata
[mdw]
[smdw]
[sdw1]
[sdw2]
=> mkdir gpmaster gpdatap1 gpdatap2 gpdatam1 gpdatam2
[mdw]
[smdw]
[sdw1]
[sdw2]
=> exit
4.2 Add the environment variables to the end of .bash_profile (on every machine)
[gpadmin@mdw conf]$ cd
[gpadmin@mdw ~]$ vi .bash_profile
source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/opt/gpdata/gpmaster/gpseg-1
export PGPORT=5432
#export PGDATABASE=testDB
export PGDATABASE=postgres

Note: PGPORT is the port your Greenplum master listens on.
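After editing, reload the profile in the current shell; since every machine needs the same settings, one way to push the file out is gpscp (a sketch, reusing the host file created earlier):

[gpadmin@mdw ~]$ source ~/.bash_profile
[gpadmin@mdw ~]$ gpscp -f /home/gpadmin/conf/standby_seg_hosts ~/.bash_profile =:/home/gpadmin/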

4.3 Create the initialization config file (master only)
[gpadmin@mdw ~]$ vi /home/gpadmin/conf/gpinitsystem_config
# FILE NAME: gpinitsystem_config

# Configuration file needed by the gpinitsystem

################################################
#### REQUIRED PARAMETERS
################################################

#### Name of this Greenplum system enclosed in quotes.
ARRAY_NAME="Greenplum"

#### Naming convention for utility-generated data directories.
SEG_PREFIX=gpseg

#### Base number by which primary segment port numbers
#### are calculated.
PORT_BASE=40000

#### File system location(s) where primary segment data directories
#### will be created. The number of locations in the list dictate
#### the number of primary segments that will get created per
#### physical host (if multiple addresses for a host are listed in
#### the hostfile, the number of segments will be spread evenly across
#### the specified interface addresses).
declare -a DATA_DIRECTORY=(/opt/gpdata/gpdatap1 /opt/gpdata/gpdatap2)

#### OS-configured hostname or IP address of the master host.
MASTER_HOSTNAME=mdw

#### File system location where the master data directory
#### will be created.
MASTER_DIRECTORY=/opt/gpdata/gpmaster

#### Port number for the master instance.
MASTER_PORT=5432

#### Shell utility used to connect to remote hosts.
TRUSTED_SHELL=ssh

#### Maximum log file segments between automatic WAL checkpoints.
CHECK_POINT_SEGMENTS=8

#### Default server-side character set encoding.
ENCODING=UNICODE

################################################
#### OPTIONAL MIRROR PARAMETERS
################################################

#### Base number by which mirror segment port numbers
#### are calculated.
MIRROR_PORT_BASE=43000

#### Base number by which primary file replication port
#### numbers are calculated.
REPLICATION_PORT_BASE=41000

#### Base number by which mirror file replication port
#### numbers are calculated.
MIRROR_REPLICATION_PORT_BASE=51000

#### File system location(s) where mirror segment data directories
#### will be created. The number of mirror locations must equal the
#### number of primary locations as specified in the
#### DATA_DIRECTORY parameter.
declare -a MIRROR_DATA_DIRECTORY=(/opt/gpdata/gpdatam1 /opt/gpdata/gpdatam2)


################################################
#### OTHER OPTIONAL PARAMETERS
################################################

#### Create a database of this name after initialization.
#DATABASE_NAME=postgres    (leave this line commented out, or initialization will fail)

#### Specify the location of the host address file here instead of
#### with the -h option of gpinitsystem.
MACHINE_LIST_FILE=/home/gpadmin/conf/standby_seg_hosts
4.4 Initialize the database (master only)
[gpadmin@mdw ~]$ cd conf
[gpadmin@mdw conf]$ gpinitsystem -c gpinitsystem_config -h standby_seg_hosts

Note: if any of the configuration above is wrong, gpinitsystem will fail. The log is written on the master under /home/gpadmin/gpAdminLogs/ as gpinitsystem_2020XXXX.log.
If initialization fails, read that log carefully and find the root cause; blindly re-running the install over and over achieves little.
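A quick way to inspect the tail of the most recent initialization log (a sketch; the exact file name varies by date):

tail -n 100 $(ls -t /home/gpadmin/gpAdminLogs/gpinitsystem_*.log | head -1)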

5. Test the Installed Greenplum Database

5.1 Start and stop the database to confirm both work
[gpadmin@mdw ~]$ gpstart
[gpadmin@mdw ~]$ gpstop
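Both utilities prompt for confirmation by default; two commonly used variants of the standard options:

gpstart -a          # start without prompting
gpstop -a -M fast   # stop without prompting, aborting active connections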
5.2 Connect to the database
[gpadmin@mdw gpseg-1]$ psql
psql (8.2.15)
Type "help" for help.
postgres=# 

If you see the prompt above, congratulations: the installation succeeded.

6. Add the Standby Master

[gpadmin@mdw gpseg-1]$ gpinitstandby -s smdw
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Checking for filespace directory /opt/gpdata/gpmaster/gpseg-1 on smdw
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master hostname               = mdw
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master data directory         = /opt/gpdata/gpmaster/gpseg-1
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master port                   = 5432
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master hostname       = smdw
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master data directory = /opt/gpdata/gpmaster/gpseg-1
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum update system catalog         = On
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:- Filespace locations
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20240821:10:33:31:012675 gpinitstandby:mdw:gpadmin-[INFO]:-pg_system -> /opt/gpdata/gpmaster/gpseg-1
Do you want to continue with standby master initialization? Yy|Nn (default=N):
> y
20240821:10:33:33:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20240821:10:33:33:012675 gpinitstandby:mdw:gpadmin-[INFO]:-The packages on smdw are consistent.
20240821:10:33:33:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Adding standby master to catalog...
20240821:10:33:33:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Database catalog updated successfully.
20240821:10:33:34:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Updating pg_hba.conf file...
20240821:10:33:40:012675 gpinitstandby:mdw:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20240821:10:33:41:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Updating filespace flat files...
20240821:10:33:41:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Filespace flat file updated successfully.
20240821:10:33:41:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Starting standby master
20240821:10:33:41:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Checking if standby master is running on host: smdw  in directory: /opt/gpdata/gpmaster/gpseg-1
20240821:10:33:43:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20240821:10:33:49:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20240821:10:33:49:012675 gpinitstandby:mdw:gpadmin-[INFO]:-Successfully created standby master on smdw

Use gpstate -f to check whether it succeeded:


[gpadmin@mdw gpseg-1]$ gpstate -f
20240821:10:34:03:012803 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -f
20240821:10:34:03:012803 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.16.1 build 1'
20240821:10:34:03:012803 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.16.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Aug 14 2017 22:20:16'
20240821:10:34:03:012803 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:-Standby master details
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:-----------------------
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:-   Standby address          = smdw
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:-   Standby data directory   = /opt/gpdata/gpmaster/gpseg-1
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:-   Standby port             = 5432
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:-   Standby PID              = 19046
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:-   Standby status           = Standby host passive
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:--pg_stat_replication
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:--WAL Sender State: streaming
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:--Sync state: sync
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:--Sent Location: 0/C000000
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:--Flush Location: 0/C000000
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:--Replay Location: 0/C000000
20240821:10:34:04:012803 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
[gpadmin@mdw gpseg-1]$ gpstate -m
20240821:10:34:26:012842 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -m
20240821:10:34:26:012842 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.16.1 build 1'
20240821:10:34:26:012842 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.16.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Aug 14 2017 22:20:16'
20240821:10:34:26:012842 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:--Current GPDB mirror list and status
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:--Type = Group
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:-   Mirror   Datadir                       Port    Status    Data Status
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:-   sdw2     /opt/gpdata/gpdatam1/gpseg0   43000   Passive   Synchronized
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:-   sdw2     /opt/gpdata/gpdatam2/gpseg1   43001   Passive   Synchronized
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:-   smdw     /opt/gpdata/gpdatam1/gpseg2   43000   Passive   Synchronized
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:-   smdw     /opt/gpdata/gpdatam2/gpseg3   43001   Passive   Synchronized
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:-   sdw1     /opt/gpdata/gpdatam1/gpseg4   43000   Passive   Synchronized
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:-   sdw1     /opt/gpdata/gpdatam2/gpseg5   43001   Passive   Synchronized
20240821:10:34:27:012842 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------

At this point the deployment is essentially complete; all that remains is to test that you can connect to the database and to set a user password.

7. Connect to the Database

1. Set a password

[gpadmin@mdw ~]$ psql -U  gpadmin -d postgres
psql (8.2.15)
Type "help" for help.
postgres=# ALTER USER gpadmin with password 'gpadmin';
ALTER ROLE
postgres=# \q

Host: 192.168.1.171
Port: 5432
Default database: postgres
User: gpadmin
Password: gpadmin
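Out of the box the master may only accept local connections; for a remote client to connect with the details above, a pg_hba.conf entry on the master is typically needed (a sketch, assuming clients come from the 192.168.1.0/24 subnet):

# append to $MASTER_DATA_DIRECTORY/pg_hba.conf on the master
host  all  gpadmin  192.168.1.0/24  md5

# reload the configuration without restarting the cluster
gpstop -u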

Common commands:

gpstate -e       # show mirror status
gpstate -f       # show standby master status
gpstate -s       # show the status of the whole cluster
gpstate -i       # show the Greenplum version
gpstate --help   # full gpstate usage
