How to recover a failed segment in Greenplum

This document records how failed segments are handled and mirror roles are switched back in a Greenplum environment. After starting and checking GPDB with `gpstart`, several mirror segments were found to be acting as primaries. The problem was confirmed with `gpstate`, and recovery was then performed with `gprecoverseg -r`. While performing the recovery, make sure the failed segments are not started, so that data consistency is preserved. In the end, all segments returned to normal operation.
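The rest of this walkthrough follows the workflow described above. For reference, a minimal sketch of those standard commands, assuming they are run as gpadmin on the master after the cluster is up:

# 1. Confirm which primaries are down and which mirrors have taken over
gpstate -e
gpstate -m

# 2. Recover the failed segments and resynchronize them (they rejoin as mirrors)
gprecoverseg -a

# 3. Wait until every mirror reports Synchronized instead of Resynchronizing
gpstate -m

# 4. Rebalance: switch every segment back to its preferred role
gprecoverseg -r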

The environment is as follows:

more /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)
The OS was installed with the minimal installation option.

greenplum-db-5.16.0-rhel7-x86_64.zip

more /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.15.201  rhmdw
192.168.15.202  rhsdw1
192.168.15.203  rhsdw2
192.168.15.205  rhsdw03
192.168.15.206  rhsdw04

How to recover a failed segment in Greenplum and how to switch mirror and primary roles

--Start GPDB
[gpadmin@rhmdw ~]$  gpssh -f /gp/app/config/hostlist -e 'netstat -nltp | grep postgres'
[  rhmdw] netstat -nltp | grep postgres
[  rhmdw] (Not all processes could be identified, non-owned process info
[  rhmdw]  will not be shown, you would have to be root to see it all.)
[rhsdw03] netstat -nltp | grep postgres
[rhsdw03] (Not all processes could be identified, non-owned process info
[rhsdw03]  will not be shown, you would have to be root to see it all.)
[ rhsdw2] netstat -nltp | grep postgres
[ rhsdw2] (No info could be read for "-p": geteuid()=1000 but you should be root.)
[rhsdw04] netstat -nltp | grep postgres
[rhsdw04] (Not all processes could be identified, non-owned process info
[rhsdw04]  will not be shown, you would have to be root to see it all.)
[ rhsdw1] netstat -nltp | grep postgres
[ rhsdw1] (No info could be read for "-p": geteuid()=1000 but you should be root.)
[gpadmin@rhmdw ~]$ gpstart -a
20190404:13:44:18:002757 gpstart:rhmdw:gpadmin-[INFO]:-Starting gpstart with args: -a
20190404:13:44:18:002757 gpstart:rhmdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20190404:13:44:18:002757 gpstart:rhmdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44'
20190404:13:44:18:002757 gpstart:rhmdw:gpadmin-[INFO]:-Greenplum Catalog Version: '301705051'
20190404:13:44:18:002757 gpstart:rhmdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20190404:13:44:19:002757 gpstart:rhmdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20190404:13:44:19:002757 gpstart:rhmdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20190404:13:44:19:002757 gpstart:rhmdw:gpadmin-[INFO]:-Setting new master era
20190404:13:44:19:002757 gpstart:rhmdw:gpadmin-[INFO]:-Master Started...
20190404:13:44:19:002757 gpstart:rhmdw:gpadmin-[INFO]:-Shutting down master
20190404:13:44:20:002757 gpstart:rhmdw:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...
...
20190404:13:44:23:002757 gpstart:rhmdw:gpadmin-[INFO]:-Process results...
20190404:13:44:23:002757 gpstart:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:13:44:23:002757 gpstart:rhmdw:gpadmin-[INFO]:-   Successful segment starts                                            = 5   <<=============== 5 segments started successfully
20190404:13:44:23:002757 gpstart:rhmdw:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20190404:13:44:23:002757 gpstart:rhmdw:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20190404:13:44:23:002757 gpstart:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:13:44:23:002757 gpstart:rhmdw:gpadmin-[INFO]:-Successfully started 5 of 5 segment instances
20190404:13:44:23:002757 gpstart:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:13:44:23:002757 gpstart:rhmdw:gpadmin-[INFO]:-Starting Master instance rhmdw directory /gp/gpdata/master/gpseg-1
20190404:13:44:25:002757 gpstart:rhmdw:gpadmin-[INFO]:-Command pg_ctl reports Master rhmdw instance active
20190404:13:44:25:002757 gpstart:rhmdw:gpadmin-[INFO]:-No standby master configured.  skipping...
20190404:13:44:25:002757 gpstart:rhmdw:gpadmin-[INFO]:-Database successfully started
[gpadmin@rhmdw ~]$

--Check the status:
[gpadmin@rhmdw ~]$ gpstate -m
20190404:13:45:20:002985 gpstate:rhmdw:gpadmin-[INFO]:-Starting gpstate with args: -m
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44'
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Jan 16 2019 02:32:15'
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:--------------------------------------------------------------
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:--Current GPDB mirror list and status
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:--Type = Spread
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:--------------------------------------------------------------
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:-   Mirror    Datadir                    Port   Status              Data Status       
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:-   rhsdw2    /gp/gpdata/mirror/gpseg0   7000   Acting as Primary   Change Tracking
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:-   rhsdw1    /gp/gpdata/mirror/gpseg1   7000   Acting as Primary   Change Tracking
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:-   rhsdw04   /gp/gpdata/mirror/gpseg2   7000   Passive             Synchronized
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:-   rhsdw03   /gp/gpdata/mirror/gpseg3   7000   Acting as Primary   Change Tracking
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[INFO]:--------------------------------------------------------------
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[WARNING]:-3 segment(s) configured as mirror(s) are acting as primaries
20190404:13:45:21:002985 gpstate:rhmdw:gpadmin-[WARNING]:-3 mirror segment(s) acting as primaries are in change tracking
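The roles reported by gpstate -m can also be cross-checked from the master catalog. A minimal sketch, assuming a psql connection from the master as gpadmin (gp_segment_configuration is the standard Greenplum catalog table):

# Any row where role differs from preferred_role is a segment whose mirror has
# taken over (role/preferred_role: p = primary, m = mirror; status: u = up, d = down).
psql -d postgres -c "
SELECT dbid, content, role, preferred_role, mode, status, hostname, port
FROM   gp_segment_configuration
WHERE  role <> preferred_role OR status = 'd'
ORDER  BY content;"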

Check the status again:
The mirrors on hosts rhsdw2, rhsdw1 and rhsdw03 are running as acting primaries. Checking the ports shows that port 6000 (the primary segment port) is not listening on the failed nodes:
[gpadmin@rhmdw ~]$  gpssh -f /gp/app/config/hostlist -e 'netstat -nltp | grep postgres'
[ rhsdw2] netstat -nltp | grep postgres
[ rhsdw2] (Not all processes could be identified, non-owned process info
[ rhsdw2]  will not be shown, you would have to be root to see it all.)
[ rhsdw2] tcp        0      0 0.0.0.0:7000            0.0.0.0:*               LISTEN      13492/postgres      
[ rhsdw2] tcp6       0      0 :::7000                 :::*                    LISTEN      13492/postgres      
[  rhmdw] netstat -nltp | grep postgres
[  rhmdw] (Not all processes could be identified, non-owned process info
[  rhmdw]  will not be shown, you would have to be root to see it all.)
[  rhmdw] tcp        0      0 0.0.0.0:5432            0.0.0.0:*               LISTEN      2833/postgres       
[  rhmdw] tcp6       0      0 :::55763                :::*                    LISTEN      2840/postgres:  543
[  rhmdw] tcp6       0      0 :::5432                 :::*                    LISTEN      2833/postgres       
[rhsdw03] netstat -nltp | grep postgres
[rhsdw03] (Not all processes could be identified, non-owned process info
[rhsdw03]  will not be shown, you would have to be root to see it all.)
[rhsdw03] tcp        0      0 192.168.15.205:9000     0.0.0.0:*               LISTEN      25726/postgres:  60
[rhsdw03] tcp        0      0 0.0.0.0:6000            0.0.0.0:*               LISTEN      25714/postgres      
[rhsdw03] tcp        0      0 0.0.0.0:7000            0.0.0.0:*               LISTEN      25713/postgres      
[rhsdw03] tcp6       0      0 :::6000                 :::*                    LISTEN      25714/postgres      
[rhsdw03] tcp6       0      0 :::7000                 :::*                    LISTEN      25713/postgres      
[ rhsdw1] netstat -nltp | grep postgres
[ rhsdw1] (Not all processes could be identified, non-owned process info
[ rhsdw1]  will not be shown, you would have to be root to see it all.)
[ rhsdw1] tcp        0      0 0.0.0.0:7000            0.0.0.0:*               LISTEN      19957/postgres      
[ rhsdw1] tcp6       0      0 :::7000                 :::*                    LISTEN      19957/postgres      
[rhsdw04] netstat -nltp | grep postgres
[rhsdw04] (Not all processes could be identified, non-owned process info
[rhsdw04]  will not be shown, you would have to be root to see it all.)
[rhsdw04] tcp        0      0 192.168.15.206:8000     0.0.0.0:*               LISTEN      13713/postgres:  70
[rhsdw04] tcp        0      0 0.0.0.0:7000            0.0.0.0:*               LISTEN      13707/postgres      
[rhsdw04] tcp6       0      0 :::7000                 :::*                    LISTEN      13707/postgres
Checking with gpstate -b shows that the postmaster.pid file is missing on three of the segments:
[gpadmin@rhmdw ~]$ gpstate -b
20190404:13:45:35:003051 gpstate:rhmdw:gpadmin-[INFO]:-Starting gpstate with args: -b
20190404:13:45:36:003051 gpstate:rhmdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44'
20190404:13:45:36:003051 gpstate:rhmdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Jan 16 2019 02:32:15'
20190404:13:45:36:003051 gpstate:rhmdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20190404:13:45:36:003051 gpstate:rhmdw:gpadmin-[INFO]:-Gathering data from segments...
..
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-Greenplum instance status summary
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Master instance                                           = Active
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Master standby                                            = No master standby configured
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total segment instance count from metadata                = 8
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Primary Segment Status
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total primary segments                                    = 4
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total primary segment valid (at master)                   = 1
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[WARNING]:-Total primary segment failures (at master)                = 3                              <<<<<<<<
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[WARNING]:-Total number of postmaster.pid files missing              = 3                              <<<<<<<<
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 1
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[WARNING]:-Total number of postmaster.pid PIDs missing               = 3                              <<<<<<<<
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 1
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[WARNING]:-Total number of /tmp lock files missing                   = 3                              <<<<<<<<
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 1
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[WARNING]:-Total number postmaster processes missing                 = 3                              <<<<<<<<
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number postmaster processes found                   = 1
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Mirror Segment Status
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total mirror segments                                     = 4
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total mirror segment valid (at master)                    = 4
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total mirror segment failures (at master)                 = 0
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 4
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 4
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 4
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number postmaster processes found                   = 4
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[WARNING]:-Total number mirror segments acting as primary segments   = 3                              <<<<<<<<
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-   Total number mirror segments acting as mirror segments    = 1
20190404:13:45:38:003051 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
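The missing postmaster.pid warnings above can be confirmed directly on the segment hosts. A small sketch reusing the same gpssh hostlist; the primary data directory layout is the one shown elsewhere in this document:

# List postmaster.pid under each primary data directory; failed primaries will report nothing.
gpssh -f /gp/app/config/hostlist -e 'ls -l /gp/gpdata/primary/gpseg*/postmaster.pid 2>/dev/null'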

Check the startup processes on a healthy node: the segment instances were started directly with the postgres binary. Try starting the failed segments the same way.
[root@rhsdw03 ~]# ps -ef |grep postgre
gpadmin  25713     1  0 13:43 ?        00:00:00 /gp/app/bin/postgres -D /gp/gpdata/mirror/gpseg3 -p 7000 --gp_dbid=8 --gp_num_contents_in_cluster=4 --silent-mode=true -i -M quiescent --gp_contentid=3
gpadmin  25714     1  0 13:43 ?        00:00:00 /gp/app/bin/postgres -D /gp/gpdata/primary/gpseg2 -p 6000 --gp_dbid=6 --gp_num_contents_in_cluster=4 --silent-mode=true -i -M quiescent --gp_contentid=2
gpadmin  25715 25713  0 13:43 ?        00:00:00 postgres:  7000, logger process   
gpadmin  25716 25714  0 13:43 ?        00:00:00 postgres:  6000, logger process   
gpadmin  25723 25713  0 13:43 ?        00:00:00 postgres:  7000, primary process   
gpadmin  25724 25714  0 13:43 ?        00:00:00 postgres:  6000, primary process   
gpadmin  25725 25723  0 13:43 ?        00:00:00 postgres:  7000, primary recovery process   
gpadmin  25726 25724  0 13:43 ?        00:00:00 postgres:  6000, primary receiver ack process   
gpadmin  25727 25724  0 13:43 ?        00:00:00 postgres:  6000, primary sender process   
gpadmin  25728 25724  0 13:43 ?        00:00:00 postgres:  6000, primary consumer ack process   
gpadmin  25729 25724  0 13:43 ?        00:00:00 postgres:  6000, primary recovery process   
gpadmin  25732 25713  0 13:43 ?        00:00:00 postgres:  7000, stats collector process   
gpadmin  25733 25713  0 13:43 ?        00:00:00 postgres:  7000, writer process   
gpadmin  25734 25713  0 13:43 ?        00:00:00 postgres:  7000, checkpointer process   
gpadmin  25735 25713  0 13:43 ?        00:00:00 postgres:  7000, sweeper process   
gpadmin  25736 25713  0 13:43 ?        00:00:00 postgres:  7000, stats sender process   
gpadmin  25737 25713  0 13:43 ?        00:00:00 postgres:  7000, wal writer process   
gpadmin  25740 25714  0 13:43 ?        00:00:00 postgres:  6000, stats collector process   
gpadmin  25741 25714  0 13:43 ?        00:00:00 postgres:  6000, writer process   
gpadmin  25742 25714  0 13:43 ?        00:00:00 postgres:  6000, checkpointer process   
gpadmin  25743 25714  0 13:43 ?        00:00:00 postgres:  6000, sweeper process   
gpadmin  25744 25714  0 13:43 ?        00:00:00 postgres:  6000, stats sender process   
gpadmin  25745 25714  0 13:43 ?        00:00:00 postgres:  6000, wal writer process   
root     26091 24220  0 13:51 pts/8    00:00:00 grep --color=auto postgre

Contents of the startup parameters file:
[gpadmin@rhsdw03 gpseg2]$ more /gp/gpdata/primary/gpseg2/postmaster.opts
/gp/app/bin/postgres "-D" "/gp/gpdata/primary/gpseg2" "-p" "6000" "--gp_dbid=6" "--gp_num_contents_in_cluster=4" "--silent-mode=true" "-i" "-M" "quiescent" "--gp_contentid=2"
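Since postmaster.opts records the exact command line used for the last startup, it can also be replayed as-is instead of retyping the arguments. A sketch, assuming the same gpadmin environment on the failed host; the recorded --silent-mode=true option lets the postgres process detach from the terminal:

# Replay the recorded startup command for this segment
bash -c "$(cat /gp/gpdata/primary/gpseg2/postmaster.opts)"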

Start the failed segment:
[gpadmin@rhsdw04 gpseg3]$ /gp/app/bin/postgres "-D" "/gp/gpdata/primary/gpseg3" "-p" "6000" "--gp_dbid=7" "--gp_num_contents_in_cluster=4" "--silent-mode=true" "-i" "-M" "quiescent" "--gp_contentid=3"
[gpadmin@rhsdw04 gpseg3]$ ps -ef | grep postgres
gpadmin  13707     1  0 13:42 ?        00:00:00 /gp/app/bin/postgres -D /gp/gpdata/mirror/gpseg2 -p 7000 --gp_dbid=9 --gp_num_contents_in_cluster=4 --silent-mode=true -i -M quiescent --gp_contentid=2
gpadmin  13708 13707  0 13:42 ?        00:00:00 postgres:  7000, logger process   
gpadmin  13712 13707  0 13:42 ?        00:00:00 postgres:  7000, mirror process   
gpadmin  13713 13712  0 13:42 ?        00:00:00 postgres:  7000, mirror receiver process   
gpadmin  13714 13712  0 13:42 ?        00:00:00 postgres:  7000, mirror consumer process   
gpadmin  13715 13712  0 13:42 ?        00:00:00 postgres:  7000, mirror consumer writer process   
gpadmin  13716 13712  0 13:42 ?        00:00:00 postgres:  7000, mirror consumer append only process   
gpadmin  13717 13712  0 13:42 ?        00:00:00 postgres:  7000, mirror sender ack process   
gpadmin  13867     1  1 13:56 ?        00:00:00 /gp/app/bin/postgres -D /gp/gpdata/primary/gpseg3 -p 6000 --gp_dbid=7 --gp_num_contents_in_cluster=4 --silent-mode=true -i -M quiescent --gp_contentid=3
gpadmin  13868 13867  0 13:56 ?        00:00:00 postgres:  6000, logger process   
gpadmin  13870 12742  0 13:56 pts/7    00:00:00 grep --color=auto postgres

Start the other two failed segments in the same way:
[gpadmin@rhsdw1 pg_log]$ /gp/app/bin/postgres "-D" "/gp/gpdata/primary/gpseg0" "-p" "6000" "--gp_dbid=2" "--gp_num_contents_in_cluster=4" "--silent-mode=true" "-i" "-M" "quiescent" "--gp_contentid=0"
[gpadmin@rhsdw2 ~]$ /gp/app/bin/postgres "-D" "/gp/gpdata/primary/gpseg1" "-p" "6000" "--gp_dbid=3" "--gp_num_contents_in_cluster=4" "--silent-mode=true" "-i" "-M" "quiescent" "--gp_contentid=1"
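Before going back to the master, it may be worth confirming that port 6000 is now listening on every segment host, reusing the same check as earlier:

# Verify that the primary segment port (6000) is listening on each host
gpssh -f /gp/app/config/hostlist -e 'netstat -nlt | grep -w 6000'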

Check the status from the master node:
[gpadmin@rhmdw ~]$ gpstate -b
20190404:14:01:08:005024 gpstate:rhmdw:gpadmin-[INFO]:-Starting gpstate with args: -b
20190404:14:01:08:005024 gpstate:rhmdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44'
20190404:14:01:08:005024 gpstate:rhmdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.16.0 build commit:23cec7df0406d69d6552a4bbb77035dba4d7dd44) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Jan 16 2019 02:32:15'
20190404:14:01:08:005024 gpstate:rhmdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20190404:14:01:08:005024 gpstate:rhmdw:gpadmin-[INFO]:-Gathering data from segments...
..
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-Greenplum instance status summary
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Master instance                                           = Active
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Master standby                                            = No master standby configured
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total segment instance count from metadata                = 8
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Primary Segment Status
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total primary segments                                    = 4
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total primary segment valid (at master)                   = 1
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[WARNING]:-Total primary segment failures (at master)                = 3                              <<<<<<<<
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 4
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 4
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 4
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number postmaster processes found                   = 4
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Mirror Segment Status
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-----------------------------------------------------
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total mirror segments                                     = 4
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total mirror segment valid (at master)                    = 4
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total mirror segment failures (at master)                 = 0
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 4
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 4
20190404:14:01:10:005024 gpstate:rhmdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing                  
