Configuring gpperfmon and the greenplum-perfmon-web Console for Greenplum

Enabling the Performance Monitor Data Collection Agents

[gpadmin@mdw ~]$ gpperfmon_install --enable --password gpadmin --port 5432

20130327:12:10:16:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 psql -f /usr/local/greenplum-db/./lib/gpperfmon/gpperfmon3.sql template1 >& /dev/null
20130327:12:10:20:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 psql -f /usr/local/greenplum-db/./lib/gpperfmon/gpperfmon4.sql gpperfmon >& /dev/null
20130327:12:10:20:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 psql -f /usr/local/greenplum-db/./lib/gpperfmon/gpperfmon41.sql gpperfmon >& /dev/null
20130327:12:10:21:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 psql -f /usr/local/greenplum-db/./lib/gpperfmon/gpperfmon42.sql gpperfmon >& /dev/null
20130327:12:10:21:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 psql -f /usr/local/greenplum-db/./lib/gpperfmon/gpperfmonC.sql template1 >& /dev/null
20130327:12:10:21:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 psql template1 -c "DROP ROLE IF EXISTS gpmon"  >& /dev/null
20130327:12:10:21:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 psql template1 -c "CREATE ROLE gpmon WITH SUPERUSER CREATEDB LOGIN ENCRYPTED PASSWORD 'gpadmin'"  >& /dev/null
20130327:12:10:22:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-echo "local    gpperfmon         gpmon         md5" >> /data/gpseg-1/pg_hba.conf
20130327:12:10:22:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-echo "host     all         gpmon         127.0.0.1/28    md5" >> /data/gpseg-1/pg_hba.conf
20130327:12:10:22:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-touch /home/gpadmin/.pgpass >& /dev/null
20130327:12:10:22:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-mv -f /home/gpadmin/.pgpass /home/gpadmin/.pgpass.1364357416 >& /dev/null
20130327:12:10:22:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-echo "*:5432:gpperfmon:gpmon:gpadmin" >> /home/gpadmin/.pgpass
20130327:12:10:22:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-cat /home/gpadmin/.pgpass.1364357416 >> /home/gpadmin/.pgpass
20130327:12:10:22:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-chmod 0600 /home/gpadmin/.pgpass >& /dev/null
20130327:12:10:22:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gp_enable_gpperfmon -v on >& /dev/null
20130327:12:10:27:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gpperfmon_port -v 8888 >& /dev/null
20130327:12:10:33:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gp_external_enable_exec -v on --masteronly >& /dev/null
20130327:12:10:39:010920 gpperfmon_install:mdw:gpadmin-[INFO]:-gpperfmon will be enabled after a full restart of GPDB
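The installer's `.pgpass` handling in the log above follows the standard libpq password-file format: one `hostname:port:database:username:password` record per line, and the file must be mode 0600 or libpq ignores it. A minimal sketch reproducing those steps against a temporary file, so nothing real is touched (`stat -c` assumes GNU coreutils):

```shell
# Reproduce what gpperfmon_install does to ~/.pgpass, using a temp file.
PGPASS=$(mktemp)
echo "*:5432:gpperfmon:gpmon:gpadmin" > "$PGPASS"   # host:port:db:user:password
chmod 0600 "$PGPASS"                                # libpq requires 0600
stat -c '%a' "$PGPASS"                              # prints 600
cat "$PGPASS"
rm -f "$PGPASS"
```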
[gpadmin@mdw ~]$ gpstop
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-Starting gpstop with args: 
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.2.2.4 build 1'
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:---------------------------------------------
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-Master instance parameters
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:---------------------------------------------
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-   Master Greenplum instance process active PID   = 420
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-   Database                                       = template1
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-   Master port                                    = 5432
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-   Master directory                               = /data/gpseg-1
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-   Shutdown mode                                  = smart
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-   Timeout                                        = 600
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-   Shutdown Master standby host                   = Off
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:---------------------------------------------
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-Segment instances that will be shutdown:
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:---------------------------------------------
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-   Host   Datadir         Port    Status
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-   mdw    /data1/gpseg0   40000   u
20130327:12:10:53:011830 gpstop:mdw:gpadmin-[INFO]:-   mdw    /data2/gpseg1   40001   u


Continue with Greenplum instance shutdown Yy|Nn (default=N):
> y
20130327:12:10:57:011830 gpstop:mdw:gpadmin-[INFO]:-There are 0 connections to the database
20130327:12:10:57:011830 gpstop:mdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20130327:12:10:57:011830 gpstop:mdw:gpadmin-[INFO]:-Master host=mdw
20130327:12:10:57:011830 gpstop:mdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20130327:12:10:57:011830 gpstop:mdw:gpadmin-[INFO]:-Master segment instance directory=/data/gpseg-1
20130327:12:10:58:011830 gpstop:mdw:gpadmin-[INFO]:-No standby master host configured
20130327:12:10:58:011830 gpstop:mdw:gpadmin-[INFO]:-Commencing parallel segment instance shutdown, please wait...
... 
20130327:12:11:01:011830 gpstop:mdw:gpadmin-[INFO]:-----------------------------------------------------
20130327:12:11:01:011830 gpstop:mdw:gpadmin-[INFO]:-   Segments stopped successfully      = 2
20130327:12:11:01:011830 gpstop:mdw:gpadmin-[INFO]:-   Segments with errors during stop   = 0
20130327:12:11:01:011830 gpstop:mdw:gpadmin-[INFO]:-----------------------------------------------------
20130327:12:11:01:011830 gpstop:mdw:gpadmin-[INFO]:-Successfully shutdown 2 of 2 segment instances 
20130327:12:11:01:011830 gpstop:mdw:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
[gpadmin@mdw ~]$ gpstart
20130327:12:11:08:011892 gpstart:mdw:gpadmin-[INFO]:-Starting gpstart with args: 
20130327:12:11:08:011892 gpstart:mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20130327:12:11:08:011892 gpstart:mdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.2.2.4 build 1'
20130327:12:11:08:011892 gpstart:mdw:gpadmin-[INFO]:-Greenplum Catalog Version: '201109210'
20130327:12:11:08:011892 gpstart:mdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20130327:12:11:09:011892 gpstart:mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20130327:12:11:09:011892 gpstart:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20130327:12:11:09:011892 gpstart:mdw:gpadmin-[INFO]:-Setting new master era
20130327:12:11:09:011892 gpstart:mdw:gpadmin-[INFO]:-Master Started...
20130327:12:11:09:011892 gpstart:mdw:gpadmin-[INFO]:-Checking for filespace consistency
20130327:12:11:09:011892 gpstart:mdw:gpadmin-[INFO]:-Obtaining current filespace entries used by TRANSACTION_FILES
20130327:12:11:09:011892 gpstart:mdw:gpadmin-[INFO]:-TRANSACTION_FILES OIDs are consistent for pg_system filespace
20130327:12:11:10:011892 gpstart:mdw:gpadmin-[INFO]:-TRANSACTION_FILES entries are consistent for pg_system filespace
20130327:12:11:10:011892 gpstart:mdw:gpadmin-[INFO]:-Checking for filespace consistency
20130327:12:11:10:011892 gpstart:mdw:gpadmin-[INFO]:-Obtaining current filespace entries used by TEMPORARY_FILES
20130327:12:11:10:011892 gpstart:mdw:gpadmin-[INFO]:-TEMPORARY_FILES OIDs are consistent for pg_system filespace
20130327:12:11:11:011892 gpstart:mdw:gpadmin-[INFO]:-TEMPORARY_FILES entries are consistent for pg_system filespace
20130327:12:11:11:011892 gpstart:mdw:gpadmin-[INFO]:-Shutting down master
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:---------------------------
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:-Master instance parameters
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:---------------------------
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:-Database                 = template1
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:-Master Port              = 5432
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:-Master directory         = /data/gpseg-1
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:-Timeout                  = 600 seconds
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:-Master standby           = Off 
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:---------------------------------------
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:-Segment instances that will be started
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:---------------------------------------
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:-   Host   Datadir         Port
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:-   mdw    /data1/gpseg0   40000
20130327:12:11:12:011892 gpstart:mdw:gpadmin-[INFO]:-   mdw    /data2/gpseg1   40001


Continue with Greenplum instance startup Yy|Nn (default=N):
> y
20130327:12:11:14:011892 gpstart:mdw:gpadmin-[INFO]:-No standby master configured.  skipping...
20130327:12:11:14:011892 gpstart:mdw:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
.. 
20130327:12:11:16:011892 gpstart:mdw:gpadmin-[INFO]:-Process results...
20130327:12:11:16:011892 gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
20130327:12:11:16:011892 gpstart:mdw:gpadmin-[INFO]:-   Successful segment starts                                            = 2
20130327:12:11:16:011892 gpstart:mdw:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20130327:12:11:16:011892 gpstart:mdw:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20130327:12:11:16:011892 gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
20130327:12:11:16:011892 gpstart:mdw:gpadmin-[INFO]:-
20130327:12:11:16:011892 gpstart:mdw:gpadmin-[INFO]:-Successfully started 2 of 2 segment instances 
20130327:12:11:16:011892 gpstart:mdw:gpadmin-[INFO]:-----------------------------------------------------
20130327:12:11:16:011892 gpstart:mdw:gpadmin-[INFO]:-Starting Master instance mdw directory /data/gpseg-1 
20130327:12:11:17:011892 gpstart:mdw:gpadmin-[INFO]:-Command pg_ctl reports Master mdw instance active
20130327:12:11:17:011892 gpstart:mdw:gpadmin-[INFO]:-Database successfully started
[gpadmin@mdw ~]$ ps -ef | grep gpmmon
gpadmin  12546 12528  0 12:11 ?        00:00:00 /usr/local/greenplum-db-4.2.2.4/bin/gpmmon -D /data/gpseg-1/gpperfmon/conf/gpperfmon.conf -p 5432
gpadmin  12926  9951  0 12:11 pts/3    00:00:00 grep gpmmon
[gpadmin@mdw ~]$ psql gpperfmon -c 'SELECT * FROM system_now;'
 ctime | hostname | mem_total | mem_used | mem_actual_used | mem_actual_free | swap_total | swap_used | swap_page_in | swap_page_out | cpu_user | cpu_sys | cpu_idle | l
oad0 | load1 | load2 | quantum | disk_ro_rate | disk_wo_rate | disk_rb_rate | disk_wb_rate | net_rp_rate | net_wp_rate | net_rb_rate | net_wb_rate 
-------+----------+-----------+----------+-----------------+-----------------+------------+-----------+--------------+---------------+----------+---------+----------+--
-----+-------+-------+---------+--------------+--------------+--------------+--------------+-------------+-------------+-------------+-------------
(0 rows)


[gpadmin@mdw ~]$ 

Installing the Greenplum Performance Monitor console

1. Download the greenplum-perfmon-web installer that matches your platform.
2. Unzip the package and run the installer:
# unzip greenplum-perfmon-web-4.1.x.x-PLATFORM.zip
# ./greenplum-perfmon-web-4.1.x.x-PLATFORM.bin
3. Change the ownership of the installed files:
# chown -R gpadmin:gpadmin /usr/local/greenplum-perfmon-web-4.1.x.x
4. Source the environment file:
# source /usr/local/greenplum-perfmon-web-4.1.x.x/gpperfmon_path.sh
Setting up the Performance Monitor console

1. Log in as the gpadmin user:
# su - gpadmin
2. Run the setup command:
$ gpperfmon --setup
Fill in the information as prompted; the agent port defaults to 8888 and the web port to 28080, and SSL can be enabled.
3. Start perfmon:
$ gpperfmon --start
Then visit https://IP:28080

Log in with your user name and password, and you are done.

Hands-on example:

su - root
#unzip greenplum-cc-web-1.2.0.1-build-2-RHEL5-x86_64.zip
#./greenplum-cc-web-1.2.0.1-build-2-RHEL5-x86_64.bin
#chown -R gpadmin:gpadmin /usr/local/greenplum-cc-web-1.2.0.1-build-2


#echo source /usr/local/greenplum-cc-web-1.2.0.1-build-2/gpcc_path.sh >> ~/.bashrc
Alternatively, edit ~/.bashrc and add:
GPPERFMONHOME=/usr/local/greenplum-cc-web-1.2.0.1-build-2 
PATH=$GPPERFMONHOME/bin:$PATH 
LD_LIBRARY_PATH=$GPPERFMONHOME/lib:$LD_LIBRARY_PATH 
 
export GPPERFMONHOME 
export PATH 
export LD_LIBRARY_PATH 


#vi /home/gpadmin/gp_init_config
gp_enable_gpperfmon=on
gpperfmon_port=8888
gp_external_enable_exec=on
#su - gpadmin
[gpadmin@mdw ~]$ gpcmdr --setup


An instance name is used by the Greenplum Command Center as
a way to uniquely identify a Greenplum Database that has the monitoring
components installed and configured.  This name is also used to control
specific instances of the Greenplum Command Center web UI.  Instance names
can contain letters, digits and underscores and are not case sensitive.


Please enter a new instance name:
> myperfmon
The web component of the Greenplum Command Center can connect to a
monitor database on a remote Greenplum Database.




Is the master host for the Greenplum Database remote? Yy|Nn (default=N):

The display name is shown in the web interface and does not need to be
a hostname.
        


What would you like to use for the display name for this instance:
> myperfmon_display
What port does the Greenplum Database use? (default=5432):

Creating instance schema in GPDB.  Please wait ...
The Greenplum Command Center runs a small web server for the UI and web API.  
This web server by default runs on port 28080, but you may specify any available port.


What port would you like the web server to use for this instance? (default=28080):

Users logging in to the Command Center must provide database user
credentials.  In order to protect user names and passwords, it is recommended
that SSL be enabled.




Do you want to enable SSL for the Web API Yy|Nn (default=N):
> y
Generating a 1024 bit RSA private key
.......++++++
.....++++++
writing new private key to '/usr/local/greenplum-cc-web-1.2.0.1-build-2/instances/myperfmon/conf/cert.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:cn
State or Province Name (full name) [Some-State]:shanghai
Locality Name (eg, city) []:EMC
Organization Name (eg, company) [Internet Widgits Pty Ltd]:DCD
Organizational Unit Name (eg, section) []:tcyang
Common Name (eg, YOUR name) []:tcyang@163.com
Email Address []:tcyang@163.com


Do you want to enable ipV6 for the Web API Yy|Nn (default=N):
> n


Do you want to copy the instance to a standby master host Yy|Nn (default=Y):
> n


Done writing lighttpd configuration to /usr/local/greenplum-cc-web-1.2.0.1-build-2/instances/myperfmon/conf/lighttpd.conf


Done writing web UI configuration to /usr/local/greenplum-cc-web-1.2.0.1-build-2/instances/myperfmon/conf/gpperfmonui.conf


Greenplum Command Center UI configuration is now complete.  If
at a later date you want to change certain parameters, you can 
either re-run 'gpcmdr --setup' or edit the configuration file
located at /usr/local/greenplum-cc-web-1.2.0.1-build-2/instances/myperfmon/conf/gpperfmonui.conf.


The web UI for this instance is available at https://mdw:28080/


You can now start the web UI for this instance by running: gpcmdr --start myperfmon
[gpadmin@mdw ~]$ gpcmdr --start
Starting instance myperfmon...
Done.
[gpadmin@mdw ~]$ 
https://129.100.253.103:28080/


Configuring authentication for the monitor console

When we ran gpperfmon_install, it created the role for the monitoring database, gpmon by default. This role connects to the Greenplum database using md5 password authentication. If one day gpmon can no longer connect, the gpadmin user can be used instead, configured as follows:
1. Make sure the gpadmin role uses an md5-encrypted password:
=# ALTER ROLE gpadmin WITH ENCRYPTED PASSWORD 'greenplum';
2. Edit $MASTER_DATA_DIRECTORY/pg_hba.conf so that gpadmin can connect to the gpperfmon database:
host gpperfmon all 0.0.0.0/0 md5
3. Add an entry like this to the .pgpass file in gpadmin's home directory:
*:5432:gpperfmon:gpadmin:greenplum
4. Save the file and reload the Greenplum configuration:
$ gpstop -u
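A .pgpass entry is five colon-separated fields, hostname:port:database:username:password; note that every separator is a colon, including the one between hostname and port. A minimal sketch parsing and checking the entry used here:

```shell
# Split a .pgpass record into its five fields (bash here-string).
line='*:5432:gpperfmon:gpadmin:greenplum'
IFS=':' read -r host port db user pass <<< "$line"
echo "host=$host port=$port db=$db user=$user"   # prints host=* port=5432 db=gpperfmon user=gpadmin
```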

[gpadmin@mdw gpseg-1]$ tail -50f pg_hba.conf

# of significant bits in the mask.  Alternatively, you can write an IP
# address and netmask in separate columns to specify the set of hosts.
# Instead of a CIDR-address, you can write "samehost" to match any of
# the server's own IP addresses, or "samenet" to match any address in
# any subnet that the server is directly connected to.
#
# METHOD can be "trust", "reject", "md5", "password", "gss", "sspi",
# "krb5", "ident", "pam", "ldap", "radius" or "cert".  Note that
# "password" sends passwords in clear text; "md5" is preferred since
# it sends encrypted passwords.
#
# OPTIONS are a set of options for the authentication in the format
# NAME=VALUE.  The available options depend on the different
# authentication methods -- refer to the "Client Authentication"
# section in the documentation for a list of which options are
# available for which authentication methods.
#
# Database and user names containing spaces, commas, quotes and other
# special characters must be quoted.  Quoting one of the keywords
# "all", "sameuser", "samerole" or "replication" makes the name lose
# its special character, and just match a database or username with
# that name.
#
# This file is read on server startup and when the postmaster receives
# a SIGHUP signal.  If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect.  You can
# use "pg_ctl reload" to do that.
# Put your actual configuration here
# ----------------------------------
#
# If you want to allow non-local connections, you need to add more
# "host" records.  In that case you will also need to make PostgreSQL
# listen on a non-local interface via the listen_addresses
# configuration parameter, or via the -i or -h command line switches.
# CAUTION: Configuring the system for local "trust" authentication allows
# any local user to connect as any PostgreSQL user, including the database
# superuser. If you do not trust all your local users, use another
# authentication method.
# TYPE  DATABASE    USER        CIDR-ADDRESS          METHOD
# "local" is for Unix domain socket connections only
# IPv4 local connections:
# IPv6 local connections:
local    all         gpadmin         ident
host     all         gpadmin         127.0.0.1/28    trust
host     all         gpadmin         129.100.253.103/32       trust
host     all         all             0/0 md5 
local    gpperfmon   gpmon           md5
host     all         gpmon           127.0.0.1/28    md5
host     gpperfmon   all             0.0.0.0/0       md5
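With the entries above in place, a quick grep confirms the gpperfmon rules made it into pg_hba.conf. The sketch below runs against an inline copy of a few rules so it is self-contained; on a real cluster you would point it at $MASTER_DATA_DIRECTORY/pg_hba.conf:

```shell
# Count pg_hba.conf lines mentioning the gpperfmon database.
HBA=$(mktemp)
cat > "$HBA" <<'EOF'
local    gpperfmon   gpmon           md5
host     all         gpmon           127.0.0.1/28    md5
host     gpperfmon   all             0.0.0.0/0       md5
EOF
grep -c 'gpperfmon' "$HBA"   # prints 2
rm -f "$HBA"
```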
 


Uninstalling the Greenplum Performance Monitor

1. Stop the Performance Monitor console and remove its installation directory:
$ gpperfmon --stop
$ rm -rf /usr/local/greenplum-perfmon-web-4.1.X.X
2. Edit postgresql.conf to disable the Greenplum perfmon feature:
# su - gpadmin
$ vi $MASTER_DATA_DIRECTORY/postgresql.conf
gp_enable_gpperfmon = off
3. Edit pg_hba.conf and comment out the gpperfmon entries:
# local gpperfmon gpmon md5
# host gpperfmon gpmon 0.0.0.0/0 md5
4. Drop the gpmon role and the gpperfmon database:
$ psql template1 -c 'DROP ROLE gpmon'
$ dropdb gpperfmon
5. Finally, remove the gpperfmon data files:
$ rm -rf $MASTER_DATA_DIRECTORY/gpperfmon/data/*
Restart the database:
$ gpstop
$ gpstart
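Step 3 above comments out the pg_hba.conf entries by hand; the same edit can be sketched with sed. This is shown against a temporary copy so nothing real is modified (on a live system the target would be $MASTER_DATA_DIRECTORY/pg_hba.conf, followed by gpstop -u to reload):

```shell
# Comment out only the lines mentioning gpperfmon, leaving other rules intact.
HBA=$(mktemp)
cat > "$HBA" <<'EOF'
local    gpperfmon   gpmon           md5
host     gpperfmon   gpmon           0.0.0.0/0       md5
host     all         gpadmin         127.0.0.1/28    trust
EOF
sed -i '/gpperfmon/s/^/# /' "$HBA"   # GNU sed in-place edit
cat "$HBA"
rm -f "$HBA"
```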
