Greenplum in Practice: Setting Up a Standby for the GreenPlum Master Node [repost]

For data redundancy, the GreenPlum database supports a mechanism similar to Oracle's physical Data Guard. The mirror of the Master instance is called the Standby, and the mirrors of the Segment instances are called Mirrors. This article describes how to add a Standby to a Master node that does not yet have one.

Note that while setting up the Standby, GreenPlum automatically shuts down the database, opens the Master node in utility mode, adds the Standby's information to the gp_segment_configuration catalog, then shuts the Master down again, copies the Master's data over to the Standby node, and finally restarts the database. Adding a Standby to the Master should therefore be done during an idle period; otherwise it will affect the business.

Before setting up the Standby, the GreenPlum database software must be installed on the Standby host. This is no different from a normal GreenPlum software installation; for consistency, use the same installation path as on the Master host. The installation process is briefly walked through below.

1. Create the gpadmin user and the installation directories

[root@mdw-std ~]# groupadd -g 520 gpadmin
[root@mdw-std ~]# useradd -u 520 -g gpadmin gpadmin
[root@mdw-std ~]# passwd gpadmin
Changing password for user gpadmin.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@mdw-std ~]# mkdir -p /gpdb/app
[root@mdw-std ~]# mkdir -p /gpdb/gpdata
[root@mdw-std ~]# chown -R gpadmin:gpadmin /gpdb/

2. Configure the hosts file, adding entries for all hosts

[gpadmin@mdw config]$ vi /etc/hosts
10.9.15.20      mdw
10.9.15.22      mdw-std
10.9.15.24      sdw1
10.9.15.26      sdw2
10.9.15.28      sdw3

Besides the Standby host, the hosts files on all other hosts (the Master node and every Segment node) must also have the Standby host's entry added; in other words, the hosts file on every host in the GreenPlum cluster must contain all of the entries shown above.
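This requirement is easy to get wrong by hand. As a quick aid, a small script can report any node name that has no entry in a hosts file; the `check_hosts` helper below is made up for illustration (it assumes one "IP hostname" pair per line, ignoring aliases) and can be run on each host:

```shell
# Hypothetical helper: print hostnames from a node-list file that have
# no matching entry in a hosts-style file. One "IP hostname" per line.
check_hosts() {
  hosts_file=$1
  hostlist=$2
  missing=0
  while read -r h; do
    [ -z "$h" ] && continue
    # Compare the hostname as the second field so that "mdw" does not
    # accidentally match the "mdw-std" entry.
    if ! awk -v h="$h" '$2 == h { found = 1 } END { exit !found }' "$hosts_file"; then
      echo "MISSING: $h"
      missing=1
    fi
  done < "$hostlist"
  return $missing
}

# Example: check_hosts /etc/hosts hostlist   (should print nothing on every node)
```

Running it on every node before continuing should print nothing.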

3. Upload and unpack the installation files

[gpadmin@mdw-std gpdb]$ scp 10.9.15.20:/gpdb/*.zip .
The authenticity of host '10.9.15.20 (10.9.15.20)' can't be established.
RSA key fingerprint is 61:72:68:57:16:28:40:d4:bc:9e:68:f0:bc:ac:65:e9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.9.15.20' (RSA) to the list of known hosts.
gpadmin@10.9.15.20's password:
greenplum-db-4.3.6.2-build-1-RHEL5-x86_64.zip                        100%  134MB 133.7MB/s   00:01
[gpadmin@mdw-std gpdb]$ unzip greenplum-db-4.3.6.2-build-1-RHEL5-x86_64.zip
Archive:  greenplum-db-4.3.6.2-build-1-RHEL5-x86_64.zip
  inflating: README_INSTALL
  inflating: greenplum-db-4.3.6.2-build-1-RHEL5-x86_64.bin

4. Configure kernel parameters, adding the following

[root@mdw-std ~]# vi /etc/sysctl.conf

kernel.shmmax = 500000000
kernel.shmall = 4000000000
kernel.shmmni = 4096
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.conf.default.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2

Apply the settings with the sysctl command.

[root@mdw-std ~]# sysctl -p
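Before relying on sysctl -p, it can be worth double-checking that no required key was missed while editing the file. The `sysctl_has` helper below is a hypothetical sketch; extend the key list to cover all of the parameters above:

```shell
# Hypothetical helper: verify that each given key appears in a
# sysctl.conf-style file; prints any keys that are missing.
sysctl_has() {
  conf=$1
  shift
  rc=0
  for key in "$@"; do
    # Match "key = value" lines; dots in the key are treated loosely here.
    grep -q "^$key[[:space:]]*=" "$conf" || { echo "MISSING: $key"; rc=1; }
  done
  return $rc
}

# Example: sysctl_has /etc/sysctl.conf kernel.shmmax kernel.shmall kernel.sem
```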

5. Configure resource limits

[root@mdw-std ~]# vi /etc/security/limits.conf
gpadmin  soft  nofile  65536
gpadmin  hard  nofile  65536
gpadmin  soft  nproc  131072
gpadmin  hard  nproc  131072

6. Set the disk I/O scheduler, and add elevator=deadline to /boot/grub/menu.lst.

[root@mdw-std ~]# echo deadline > /sys/block/sda/queue/scheduler
[root@mdw-std ~]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

[root@mdw-std ~]# vi /boot/grub/menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/sda3
#          initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Oracle Linux Server Unbreakable Enterprise Kernel (3.8.13-16.2.1.el6uek.x86_64)
        root (hd0,0)
        kernel /vmlinuz-3.8.13-16.2.1.el6uek.x86_64 ro root=UUID=1fb9c4cf-3cf4-47db-bf4a-0a87958d477d rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16   KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-3.8.13-16.2.1.el6uek.x86_64.img
title Oracle Linux Server Red Hat Compatible Kernel (2.6.32-431.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=UUID=1fb9c4cf-3cf4-47db-bf4a-0a87958d477d rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM elevator=deadline rhgb quiet
        initrd /initramfs-2.6.32-431.el6.x86_64.img

7. Configure disk read-ahead

[root@mdw-std ~]# /sbin/blockdev --setra 65535 /dev/sda3
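Note that both the echo into /sys and blockdev --setra last only until the next reboot; the elevator=deadline boot parameter takes care of the scheduler from then on, but the read-ahead value has to be re-applied at boot. One common approach (a sketch; /dev/sda3 is the device used in this article's environment) is to append the command to /etc/rc.local:

```shell
# /etc/rc.local -- re-apply the non-persistent read-ahead setting at boot
# (device /dev/sda3 is taken from this article's environment)
/sbin/blockdev --setra 65535 /dev/sda3
```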

8. Install the GreenPlum database software

[gpadmin@mdw-std gpdb]$ /bin/bash greenplum-db-4.3.6.2-build-1-RHEL5-x86_64.bin

After paging past the license text, the installer asks you to accept the license agreement.

********************************************************************************
Do you accept the Pivotal Database license agreement? [yes|no]
********************************************************************************

After you enter yes, the installer prompts for the installation path.

yes

********************************************************************************
Provide the installation path for Greenplum Database or press ENTER to
accept the default installation path: /usr/local/greenplum-db-4.3.6.2
********************************************************************************

Use the same installation path as on the Master.

/gpdb/app

********************************************************************************
Install Greenplum Database into </gpdb/app>? [yes|no]
********************************************************************************

Enter yes to confirm the installation path.

yes

Extracting product to /gpdb/app

********************************************************************************
Installation complete.
Greenplum Database is installed in /gpdb/app

Pivotal Greenplum documentation is available
for download at http://docs.gopivotal.com/gpdb
********************************************************************************

The GreenPlum software installation is complete. Next, configure the gpadmin user's environment by adding the following line.

[gpadmin@mdw-std app]$ vi /home/gpadmin/.bash_profile
source /gpdb/app/greenplum_path.sh

Make it take effect.

[gpadmin@mdw-std app]$ . /home/gpadmin/.bash_profile

9. On the Standby host, create the directories for the initialized database data and the filespace; they must match the Master's.

[gpadmin@mdw-std gpdata]$ mkdir /gpdb/gpdata/master
[gpadmin@mdw-std gpdata]$ mkdir /gpdb/gpdata/fspc_master
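gpinitstandby later expects these directories to exist and to be empty. The `dir_ready` helper below is a hypothetical pre-flight sketch using the paths from this article:

```shell
# Hypothetical pre-flight check: each directory must exist and be empty
# before gpinitstandby copies the Master's data into it.
dir_ready() {
  d=$1
  if [ ! -d "$d" ]; then
    echo "NOT A DIRECTORY: $d"
    return 1
  fi
  if [ -n "$(ls -A "$d")" ]; then
    echo "NOT EMPTY: $d"
    return 1
  fi
  echo "OK: $d"
}

# Example: dir_ready /gpdb/gpdata/master && dir_ready /gpdb/gpdata/fspc_master
```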

All of the steps above are performed on the Standby host; the steps that follow are performed on the Master host.

10. On the Master node, configure SSH mutual trust between all nodes and the Standby node.

[gpadmin@mdw config]$ vi hostlist

mdw
mdw-std
sdw1
sdw2
sdw3

[gpadmin@mdw config]$ gpssh-exkeys -f hostlist
[STEP 1 of 5] create local ID and authorize on local host
  ... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts
  ... send to mdw-std
  ***
  *** Enter password for mdw-std:
  ... send to sdw1
  ... send to sdw2
  ... send to sdw3

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with mdw-std
  ... finished key exchange with sdw1
  ... finished key exchange with sdw2
  ... finished key exchange with sdw3

[INFO] completed successfully

Test whether the trust setup succeeded with the gpssh tool.

[gpadmin@mdw config]$ gpssh -f hostlist -e 'date'
[mdw-std] date
[mdw-std] Tue Feb 16 17:50:16 CST 2016
[    mdw] date
[    mdw] Tue Feb 16 17:50:16 CST 2016
[   sdw2] date
[   sdw2] Tue Feb 16 17:50:16 CST 2016
[   sdw1] date
[   sdw1] Tue Feb 16 17:50:16 CST 2016
[   sdw3] date
[   sdw3] Tue Feb 16 17:50:16 CST 2016

11. On the Master node, add the Standby with the gpinitstandby command.

[gpadmin@mdw config]$ gpinitstandby -s mdw-std

The filespace locations on the master must be mapped to
locations on the standby.  These locations must be empty on the
standby master host.  The default provided is the location of
the filespace on the master (except if the master and the
standby are hosted on the same node or host). In most cases the
defaults can be used.

Enter standby filespace location for filespace fspc1 (default: /gpdb/gpdata/fspc_master/gpseg-1):

The prompt shows the filespace directory recorded on the Master; the default is the same path as on the Master. A different path can be used on the Standby, but the same path is recommended. Note that before this step, the same database initialization directory as on the Master must already exist, here /gpdb/gpdata/master; otherwise the command fails with a path-not-found error. To accept the default filespace path, just press Enter.

>
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Checking for filespace directory /gpdb/gpdata/master/gpseg-1 on mdw-std
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Checking for filespace directory /gpdb/gpdata/fspc_master/gpseg-1 on mdw-std
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master hostname               = mdw
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master data directory         = /gpdb/gpdata/master/gpseg-1
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master port                   = 5432
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master hostname       = mdw-std
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master data directory = /gpdb/gpdata/master/gpseg-1
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum update system catalog         = On
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:- Filespace locations
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-pg_system -> /gpdb/gpdata/master/gpseg-1
20160216:17:54:26:011967 gpinitstandby:mdw:gpadmin-[INFO]:-fspc1 -> /gpdb/gpdata/fspc_master/gpseg-1
Do you want to continue with standby master initialization? Yy|Nn (default=N):

Enter y to confirm the Standby setup.

> y
20160216:17:54:56:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20160216:17:54:56:011967 gpinitstandby:mdw:gpadmin-[INFO]:-The packages on mdw-std are consistent.
20160216:17:54:56:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Adding standby master to catalog...
20160216:17:54:56:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Database catalog updated successfully.
20160216:17:54:56:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Updating pg_hba.conf file...
20160216:17:55:02:011967 gpinitstandby:mdw:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20160216:17:55:04:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Updating filespace flat files...
20160216:17:55:04:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Filespace flat file updated successfully.
20160216:17:55:04:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Starting standby master
20160216:17:55:04:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Checking if standby master is running on host: mdw-std  in directory: /gpdb/gpdata/master/gpseg-1
20160216:17:55:05:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20160216:17:55:11:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20160216:17:55:11:011967 gpinitstandby:mdw:gpadmin-[INFO]:-Successfully created standby master on mdw-std

The setup is complete; the Standby's status can be checked with the gpstate command.

[gpadmin@mdw config]$ gpstate -s
20160216:17:55:37:012095 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -s
20160216:17:55:37:012095 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.6.2 build 1'
20160216:17:55:37:012095 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.6.2 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Nov 12 2015 23:50:28'
20160216:17:55:37:012095 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20160216:17:55:37:012095 gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments...
.
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:--Master Configuration & Status
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master host                    = mdw
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master postgres process ID     = 2474
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master data directory          = /gpdb/gpdata/master/gpseg-1
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master port                    = 5432
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master current role            = dispatch
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Greenplum initsystem version   = 4.3.6.2 build 1
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Greenplum current version      = PostgreSQL 8.2.15 (Greenplum Database 4.3.6.2 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Nov 12 2015 23:50:28
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Postgres version               = 8.2.15
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Master standby                 = mdw-std
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Standby master state           = Standby host passive
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-Segment Instance Status Report
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Segment Info
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Hostname                          = sdw1
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Address                           = sdw1
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Datadir                           = /gpdb/gpdata/primary/gpseg0
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Port                              = 40000
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Status
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      PID                               = 2378
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Configuration reports status as   = Up
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Database status                   = Up
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Segment Info
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Hostname                          = sdw2
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Address                           = sdw2
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Datadir                           = /gpdb/gpdata/primary/gpseg1
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Port                              = 40000
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Status
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      PID                               = 2362
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Configuration reports status as   = Up
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Database status                   = Up
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Segment Info
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Hostname                          = sdw3
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Address                           = sdw3
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Datadir                           = /gpdb/gpdata/primary/gpseg2
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Port                              = 40000
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-   Status
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      PID                               = 2384
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Configuration reports status as   = Up
20160216:17:55:38:012095 gpstate:mdw:gpadmin-[INFO]:-      Database status                   = Up

The Standby's status can also be checked by inspecting the processes running on the Standby host.

[gpadmin@mdw-std gpdata]$ ps -ef | grep gpadmin
gpadmin   4362     1  0 17:55 ?        00:00:00 /gpdb/app/bin/postgres -D /gpdb/gpdata/master/gpseg-1 -p 5432 -b 5 -z 3 --silent-mode=true -i -M master -C -1 -x 0 -y -E
gpadmin   4378  4362  0 17:55 ?        00:00:00 postgres: port  5432, master logger process
gpadmin   4379  4362  0 17:55 ?        00:00:00 postgres: port  5432, startup process   recovering 000000010000000000000002
gpadmin   4390  4362  0 17:55 ?        00:00:00 postgres: port  5432, wal receiver process

The first process is the Standby's main (daemon) process; the second is the log-writer process; the third is the data-synchronization, i.e. recovery, process; and the fourth handles communication checks between the Standby and the Master, rather like a heartbeat. These descriptions of the processes' roles are the author's own guesses and may not be accurate.
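As a rough cross-check, the process list can be tallied by the role labels seen above. The `classify_procs` helper below is a sketch; the match patterns come from the Greenplum 4.3 output shown here and may differ in other versions:

```shell
# Count the standby-side postgres processes by role, reading `ps` output
# on stdin. Patterns are taken from the listing above (Greenplum 4.3);
# other versions may label the processes differently.
classify_procs() {
  awk '
    /silent-mode/     { n["master"]++ }   # main (daemon) process
    /logger process/  { n["logger"]++ }   # log writer
    /startup process/ { n["startup"]++ }  # WAL replay / recovery
    /wal receiver/    { n["walrcv"]++ }   # receives WAL from the Master
    END {
      printf "master=%d logger=%d startup=%d walrcv=%d\n",
             n["master"], n["logger"], n["startup"], n["walrcv"]
    }
  '
}

# Example: ps -ef | grep '[p]ostgres' | classify_procs
```

A healthy standby of this version should report one of each.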

With the Standby in place, its information can be queried from the database.

[gpadmin@mdw config]$ psql -d dbdream
psql (8.2.15)
Type "help" for help.

dbdream=# select * from gp_segment_configuration;
 dbid | content | role | preferred_role | mode | status | port  | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------+---------+------------------+--------
    1 |      -1 | p    | p              | s    | u      |  5432 | mdw      | mdw     |                  |
    2 |       0 | p    | p              | s    | u      | 40000 | sdw1     | sdw1    |                  |
    3 |       1 | p    | p              | s    | u      | 40000 | sdw2     | sdw2    |                  |
    4 |       2 | p    | p              | s    | u      | 40000 | sdw3     | sdw3    |                  |
    5 |      -1 | m    | m              | s    | u      |  5432 | mdw-std  | mdw-std |                  |
(5 rows)

 

