First, this problem is most likely caused by some task that was running on your Greenplum cluster when it went down. So don't worry about that for now: start the database directly with gpstart, and the following error appears:
[gpadmin@htsp157 data]$ gpstart -a
20230512:11:03:52:012819 gpstart:htsp157:gpadmin-[INFO]:-Starting gpstart with args: -a
20230512:11:03:52:012819 gpstart:htsp157:gpadmin-[INFO]:-Gathering information and validating the environment...
20230512:11:03:52:012819 gpstart:htsp157:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 6.2.1 build commit:d90ac1a1b983b913b3950430d4d9e47ee8827fd4'
20230512:11:03:52:012819 gpstart:htsp157:gpadmin-[INFO]:-Greenplum Catalog Version: '301908232'
20230512:11:03:52:012819 gpstart:htsp157:gpadmin-[WARNING]:-postmaster.pid file exists on Master, checking if recovery startup required
20230512:11:03:52:012819 gpstart:htsp157:gpadmin-[INFO]:-Commencing recovery startup checks
20230512:11:03:52:012819 gpstart:htsp157:gpadmin-[INFO]:-have lock file /tmp/.s.PGSQL.15432 but a process running on port 15432
20230512:11:03:52:012819 gpstart:htsp157:gpadmin-[ERROR]:-gpstart error: Port 15432 is already in use
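Before deleting anything, it is worth confirming that the lock file really is stale and seeing who holds the port. A minimal check, assuming the socket path `/tmp/.s.PGSQL.15432` reported in the log above:

```shell
# List the Unix-domain socket and its lock file left behind by the old master.
# (PostgreSQL keeps both /tmp/.s.PGSQL.<port> and /tmp/.s.PGSQL.<port>.lock.)
ls -l /tmp/.s.PGSQL.15432* 2>/dev/null || echo "no lock files under /tmp"

# See which process, if any, is actually listening on the master port.
ss -tlnp 2>/dev/null | grep 15432 || echo "nothing listening on 15432"
```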
Locate the lock file under /tmp/ that the log mentions (/tmp/.s.PGSQL.15432) and delete it, then run the following command to see which processes still hold the port:
[gpadmin@htsp157 data]$ lsof -i tcp:15432
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
postgres 1027 gpadmin 3u IPv4 628871 0t0 TCP *:15432 (LISTEN)
postgres 1027 gpadmin 4u IPv6 628872 0t0 TCP *:15432 (LISTEN)
postgres 8795 gpadmin 10u IPv4 668364 0t0 TCP 381 (ESTABLISHED)
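The PIDs in the output above can also be collected programmatically: `lsof -t` prints bare PIDs, which makes the cleanup scriptable. A sketch of the two steps, assuming port 15432 and the lock-file path from the log (kill -9 is the approach used here; sending a plain SIGTERM first would be gentler on postgres):

```shell
# Kill every process still bound to the master port.
# -t prints PIDs only; xargs -r skips the kill when the list is empty.
lsof -t -i tcp:15432 | xargs -r kill -9

# Remove the stale socket and its lock file reported by gpstart.
rm -f /tmp/.s.PGSQL.15432 /tmp/.s.PGSQL.15432.lock

# Then, on the master host:  gpstart -a
```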
Having found the processes occupying the port, kill them all with kill -9, then start the database again; this time it succeeds:
[gpadmin@htsp157 data]$ gpstart -a
20230512:11:05:18:012907 cenos1:gpadmin-[INFO]:-Starting gpstart with args: -a
20230512:11:05:18:012907 cenos1:gpadmin-[INFO]:-Gathering information and validating the environment...
20230512:11:05:18:012907 cenos1:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 6.2.1 build commit:d90ac1a1b983b913b3950430d4d9e47ee8827fd4'
20230512:11:05:18:012907 cenos1:gpadmin-[INFO]:-Greenplum Catalog Version: '301908232'
20230512:11:05:18:012907 cenos1:gpadmin-[WARNING]:-postmaster.pid file exists on Master, checking if recovery startup required
20230512:11:05:18:012907 cenos1:gpadmin-[INFO]:-Commencing recovery startup checks
20230512:11:05:18:012907 cenos1:gpadmin-[INFO]:-No socket connection or lock file in /tmp found for port=15432
20230512:11:05:18:012907 cenos1:gpadmin-[INFO]:-No Master instance process, entering recovery startup mode
20230512:11:05:18:012907 cenos1:gpadmin-[INFO]:-Clearing Master instance pid file
20230512:11:05:18:012907 cenos1:gpadmin-[INFO]:-Starting Master instance in admin mode
20230512:11:05:19:012907 cenos1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20230512:11:05:19:012907 cenos1:gpadmin-[INFO]:-Obtaining Segment details from master...
20230512:11:05:19:012907 cenos1:gpadmin-[INFO]:-Setting new master era
20230512:11:05:19:012907 cenos1:gpadmin-[INFO]:-Commencing forced instance shutdown
20230512:11:05:19:012907 cenos1:gpadmin-[INFO]:-Starting Master instance in admin mode
20230512:11:05:19:012907 cenos1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20230512:11:05:19:012907 cenos1:gpadmin-[INFO]:-Obtaining Segment details from master...
20230512:11:05:19:012907 cenos1:gpadmin-[INFO]:-Setting new master era
20230512:11:05:19:012907 cenos1:gpadmin-[INFO]:-Master Started...
20230512:11:05:19:012907 cenos1:gpadmin-[INFO]:-Shutting down master
20230512:11:05:20:012907 cenos1:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
20230512:11:05:20:012907 cenos1:gpadmin-[INFO]:-Process results...
20230512:11:05:20:012907 cenos1:gpadmin-[INFO]:-----------------------------------------------------
20230512:11:05:20:012907 cenos1:gpadmin-[INFO]:- Successful segment starts = 6
20230512:11:05:20:012907 cenos1:gpadmin-[INFO]:- Failed segment starts = 0
20230512:11:05:20:012907 cenos1:gpadmin-[INFO]:- Skipped segment starts (segments are marked down in configuration) = 0
20230512:11:05:20:012907 cenos1:gpadmin-[INFO]:-----------------------------------------------------