[Database] PostgreSQL Database Status Inspection



Preface

The following are commonly used checks of PostgreSQL database status. Security-related items are not covered; the list will continue to be extended.


1. Environment Variables

Adjust the variables below to match the current environment, and make sure that with this configuration no password prompt is needed to connect to any database.

export PGHOST=127.0.0.1
export PGPORT=54321
export PGDATABASE=test
export PGUSER=system
export PGPASSWORD=12345678ab
export PGDATA=/home/jp/database/ES/data
export PGHOME=/opt/pgsql

export PATH=$PGHOME/bin:$PATH:.
export DATE=`date +"%Y%m%d%H%M"`
export LD_LIBRARY_PATH=$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
# Remember the current directory (avoid assigning to the shell builtin PWD)
START_PWD=`pwd`

# Get the postgresql log directory (log_directory) from postgresql.conf
pg_log_dir=`grep '^\ *[a-z]' $PGDATA/postgresql.conf|awk -F "#" '{print $1}'|grep log_directory|awk -F "=" '{print $2}'`
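log_directory is often a path relative to $PGDATA, so it helps to normalize it before cd-ing into it. A minimal sketch (the sample values below are hypothetical placeholders, not parsed from a real config):

```shell
# Minimal sketch: log_directory may be relative to $PGDATA; normalize it
# before use. Both values below are hypothetical placeholders.
PGDATA=/home/jp/database/ES/data
pg_log_dir=pg_log   # as parsed from postgresql.conf
case "$pg_log_dir" in
  /*) abs_log_dir=$pg_log_dir ;;           # already absolute, use as-is
  *)  abs_log_dir=$PGDATA/$pg_log_dir ;;   # relative: prefix the data directory
esac
echo "$abs_log_dir"
```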

2. Database Information

2.1 Basic Information

"--Database version: "
select version();

"--Installed extension versions: "
select current_database(),* from pg_extension;

"--Number of data types in use: "
select current_database(),b.typname,count(*) from pg_attribute a,pg_type b where a.atttypid=b.oid and a.attrelid in (select oid 
from pg_class 
where relnamespace not in 
(select oid from pg_namespace where nspname ~ $$^pg_$$ or nspname=$$information_schema$$)) 
group by 1,2 order by 3 desc;

"--Number of objects created by users: "
select current_database(),rolname,nspname,relkind,count(*) 
from pg_class a,pg_authid b,pg_namespace c 
where a.relnamespace=c.oid and a.relowner=b.oid and nspname !~ $$^pg_$$ and nspname<>$$information_schema$$ 
group by 1,2,3,4 order by 5 desc;

"-- Histogram of user object sizes: "
select current_database(),buk this_buk_no,cnt rels_in_this_buk,pg_size_pretty(min) buk_min,pg_size_pretty(max) buk_max from( select row_number() over (partition by buk order by tsize),tsize,buk,min(tsize) over (partition by buk),max(tsize) over (partition by buk),count(*) over (partition by buk) cnt 
from ( select pg_relation_size(a.oid) tsize, width_bucket(pg_relation_size(a.oid),tmin-1,tmax+1,10) buk 
from (select min(pg_relation_size(a.oid)) tmin,max(pg_relation_size(a.oid)) tmax 
from pg_class a,pg_namespace c where a.relnamespace=c.oid and nspname !~ $$^pg_$$ and nspname<>$$information_schema$$) t, pg_class a,pg_namespace c where a.relnamespace=c.oid and nspname !~ $$^pg_$$ and nspname<>$$information_schema$$ ) t)t 
where row_number=1;

"--OS cron jobs of the current user: "
echo "I am `whoami`"
crontab -l
"Suggestions: "
"    Review whether each cron job is necessary, how its success or failure is judged, and how it is monitored. "
"    Run this script as the OS user that starts the database. "

2.2 Configuration File Information

"--md5 of pg_hba.conf: "
md5sum $PGDATA/pg_hba.conf
"Suggestions: "
"    The md5 values on primary and standby should match (one way to verify the config files are identical; diff works too)."

"--pg_hba.conf settings: "
grep '^\ *[a-z]' $PGDATA/pg_hba.conf
"Suggestions: "
"    Keep primary and standby configs as close as possible. Beware of the trust and password authentication methods (password sends the password over the network in cleartext; change it to md5). Apart from unix sockets, where trust may be acceptable, use md5 or LDAP everywhere."
"    Recommended order: first a whitelist (source IPs and databases allowed for superusers), then a blacklist (reject superuser logins from anywhere else), then a whitelist for normal applications; see the comments in pg_hba.conf. "
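A sketch of that ordering (IP addresses and database/role names are placeholders; pg_hba.conf is matched top to bottom, first match wins):

```
# TYPE  DATABASE  USER      ADDRESS          METHOD
local   all       all                        trust   # unix socket only
host    all       postgres  192.168.0.10/32  md5     # whitelist: superuser from admin host
host    all       postgres  0.0.0.0/0        reject  # blacklist: superuser from anywhere else
host    appdb     appuser   192.168.0.0/24   md5     # whitelist: normal application
```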

"--md5 of postgresql.conf: "
md5sum $PGDATA/postgresql.conf
"Suggestions: "
"    The md5 values on primary and standby should match (one way to verify the config files are identical; diff works too)."

"--postgresql.conf settings: "
grep '^\ *[a-z]' $PGDATA/postgresql.conf|awk -F "#" '{print $1}'

"--Per-user or per-database parameter overrides: "
select * from pg_db_role_setting;
"Suggestions: "
"    Watch these overrides: they take precedence over startup parameters and the config file, which matters especially when troubleshooting. "

2.3 Database Error Log Analysis

2.3.1 Error Log Messages

cat *.csv | grep -E "^[0-9]" | grep -E "WARNING|ERROR|FATAL|PANIC" | awk -F "," '{print $12" , "$13" , "$14}'|sort|uniq -c|sort -rn

"Suggestions: "
"    See http://www.postgresql.org/docs/current/static/errcodes-appendix.html ."

2.3.2 Connection Requests

find . -name "*.csv" -type f -mtime -28 -exec grep "connection authorized" {} +|awk -F "," '{print $2,$3,$5}'|sed 's/\:[0-9]*//g'|sort|uniq -c|sort -n -r
"Suggestions: "
"    If there are very many connection requests, consider a connection pool in the application layer, or pgbouncer. "
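If a server-side pool is preferred, a minimal pgbouncer.ini sketch (the listen port, pool size, and file paths are placeholder assumptions, not values from this environment):

```
[databases]
test = host=127.0.0.1 port=54321 dbname=test

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
```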

2.3.3 Authentication Failures

find . -name "*.csv" -type f -mtime -28 -exec grep "password authentication failed" {} +|awk -F "," '{print $2,$3,$5}'|sed 's/\:[0-9]*//g'|sort|uniq -c|sort -n -r
"Suggestions: "
"    Many authentication failures may indicate a brute-force attack; the auth_delay extension helps defend against it. "
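auth_delay is loaded via shared_preload_libraries and pauses after every failed authentication attempt, which slows brute-force attacks down. A postgresql.conf sketch (the delay value is an example, not a recommendation for this environment):

```
shared_preload_libraries = 'auth_delay'
auth_delay.milliseconds = 1000   # wait 1s after each authentication failure
```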

2.4 Slow SQL Log Analysis

2.4.1 Slow Query Statistics

cat *.csv|awk -F "," '{print $1" "$2" "$3" "$8" "$14}' |grep "duration:"|grep -v "plan:"|awk '{print $1" "$4" "$5" "$6}'|sort|uniq -c|sort -rn

"Suggestions: "
"    Output format: (count, date, user, database, QUERY, duration in ms). "
"    Slow queries are statements that exceeded log_min_duration_statement; analyze whether the database or the SQL can be tuned. "

2.4.2 Execution Times of the 10 Fastest and Slowest Slow Queries (ms)

"--10 slow queries with the shortest durations, ms: "
cat *.csv|awk -F "," '{print $1" "$2" "$3" "$8" "$14}' |grep "duration:"|grep -v "plan:"|awk '{print $1" "$4" "$5" "$6" "$7" "$8}'|sort -k 6 -n|head -n 10

"--10 slow queries with the longest durations, ms: "
cat *.csv|awk -F "," '{print $1" "$2" "$3" "$8" "$14}' |grep "duration:"|grep -v "plan:"|awk '{print $1" "$4" "$5" "$6" "$7" "$8}'|sort -k 6 -n|tail -n 10

2.4.3 auto_explain Statistics

cat *.csv|awk -F "," '{print $1" "$2" "$3" "$8" "$14}' |grep "plan:"|grep "duration:"|awk '{print $1" "$4" "$5" "$6}'|sort|uniq -c|sort -rn

"Suggestions: "
"    Output format: (count, date, user, database, QUERY). "
"    These are statements that exceeded auto_explain.log_min_duration; besides tuning, the auto_explain output in the csvlog shows the exact plan each slow statement ran with. "

2.5 Database Space Usage Analysis


"-- Free file system space: "
df -m

"--Tablespace directories: "
ls -la $PGDATA/pg_tblspc/
"Suggestions: "
"    Entries under pg_tblspc should normally be symbolic links; if one is not, check whether that was intentional. "

"--Tablespace usage: "
select spcname,pg_tablespace_location(oid),pg_size_pretty(pg_tablespace_size(oid)) from pg_tablespace order by pg_tablespace_size(oid) desc;
"Suggestions: "
"    Check the free space of the file systems holding the tablespaces (the default tablespace lives under $PGDATA/base) and whether IOPS are evenly distributed; the OS sysstat package can show IO utilization. "

"--Database sizes: "
select datname,pg_size_pretty(pg_database_size(oid)) from pg_database order by pg_database_size(oid) desc;
"Suggestions: "
"    Check database sizes and whether historical data should be purged. "

"--TOP 10 objects by size: "
select current_database(),b.nspname,c.relname,c.relkind,pg_size_pretty(pg_relation_size(c.oid)),a.seq_scan,a.seq_tup_read,a.idx_scan,a.idx_tup_fetch,a.n_tup_ins,a.n_tup_upd,a.n_tup_del,a.n_tup_hot_upd,a.n_live_tup,a.n_dead_tup 
from pg_stat_all_tables a, pg_class c,pg_namespace b 
where c.relnamespace=b.oid and c.relkind=$$r$$ and a.relid=c.oid 
order by pg_relation_size(c.oid) desc limit 10;
"Suggestions: "
"    Rule of thumb: if a single table exceeds 8GB and is frequently updated or deleted+inserted, split it along business lines for better performance and easier maintenance of bloated indexes; if it is read-only, tune the SQL that touches it instead. "

2.7 Database Connection Analysis

"--Current activity: "
select now(),state,count(*) from pg_stat_activity group by 1,2;

"Suggestions: "
"    Many active sessions mean a busy database. Many idle in transaction sessions suggest a problem in the business logic. Many idle sessions suggest a connection pool that may not be shrinking back to its minimum size. "

"--Remaining connection slots: "
select max_conn,used,res_for_super,max_conn-used-res_for_super res_for_normal from (select count(*) used from pg_stat_activity) t1,(select setting::int res_for_super from pg_settings where name=$$superuser_reserved_connections$$) t2,(select setting::int max_conn from pg_settings where name=$$max_connections$$) t3;

"Suggestions: "
"    Reserve enough connections for both superusers and normal users so that logins never fail. "

"--Per-role connection limits: "
select a.rolname,a.rolconnlimit,b.connects from pg_authid a,(select usename,count(*) connects from pg_stat_activity group by usename) b where a.rolname=b.usename order by b.connects desc;

"Suggestions: "
"    Give each role enough connections: alter role ... CONNECTION LIMIT . "

"--Per-database connection limits: "
select a.datname, a.datconnlimit, b.connects from pg_database a,(select datname,count(*) connects from pg_stat_activity group by datname) b where a.datname=b.datname order by b.connects desc;

"Suggestions: "
"    Give each database enough connections: alter database ... CONNECTION LIMIT . "
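The two ALTER commands above take the following form (role, database, and limit values are hypothetical examples):

```sql
ALTER ROLE appuser CONNECTION LIMIT 100;    -- per-role cap
ALTER DATABASE appdb CONNECTION LIMIT 200;  -- per-database cap
ALTER ROLE appuser CONNECTION LIMIT -1;     -- -1 means unlimited
```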

2.8 Database Performance Analysis

2.8.1 TOP 5 SQL by total_cpu_time

select c.rolname,b.datname,a.total_time/a.calls per_call_time,a.* 
from pg_stat_statements a,pg_database b,pg_authid c 
where a.userid=c.oid and a.dbid=b.oid 
order by a.total_time desc limit 5;

"Suggestions: "
"    Check whether these statements can be tuned; combine with the auto_explain extension to see in the csvlog whether long SQL uses a correct plan. (Note: in PostgreSQL 13+ the pg_stat_statements column total_time was split into total_plan_time and total_exec_time.) "
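Both pg_stat_statements (queried above) and auto_explain must be preloaded to work. A postgresql.conf sketch (the threshold values are examples, not recommendations for this environment); after a restart, run `CREATE EXTENSION pg_stat_statements;` in the target database:

```
shared_preload_libraries = 'pg_stat_statements,auto_explain'
pg_stat_statements.max = 5000          # number of distinct statements tracked
pg_stat_statements.track = all         # also track statements inside functions
auto_explain.log_min_duration = '1s'   # log plans of statements slower than 1s
auto_explain.log_analyze = off         # on = actual row counts, but adds overhead
```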

2.8.2 Tables Larger than 10MB with More than 4 Indexes

select current_database(), t2.nspname, t1.relname, pg_size_pretty(pg_relation_size(t1.oid)), t3.idx_cnt 
from pg_class t1, pg_namespace t2, 
(select indrelid,count(*) idx_cnt from pg_index group by 1 having count(*)>4) t3 
where t1.oid=t3.indrelid and t1.relnamespace=t2.oid and pg_relation_size(t1.oid)/1024/1024.0>10 
order by t3.idx_cnt desc;

"Suggestions: "
"    Too many indexes slow down INSERT/UPDATE/DELETE on the table; check whether any of them are unnecessary. "

2.8.3 Database Statistics: Rollback Ratio, Hit Ratio, Block Read/Write Time, Deadlocks, Replication Conflicts


select datname,round(100*(xact_rollback::numeric/(case when xact_commit > 0 then xact_commit else 1 end + xact_rollback)),2)||$$ %$$ rollback_ratio,
round(100*(blks_hit::numeric/(case when blks_read>0 then blks_read else 1 end + blks_hit)),2)||$$ %$$ hit_ratio, blk_read_time, blk_write_time, conflicts, deadlocks 
from pg_stat_database;

"Suggestions: "
"    A high rollback ratio points to problems in the business logic; a low hit ratio means shared_buffers should be increased. "
"    Long block read/write times call for faster block-device IO; frequent deadlocks point to problems in the business logic. "
"    Many replication conflicts suggest long-running SQL on the standby. "

2.8.4 Checkpoint and bgwriter Statistics

select * from pg_stat_bgwriter;

"Suggestions: "
"    A large checkpoint_write_time means checkpoints take long and produce many dirty pages. "
"    checkpoint_sync_time is the time spent syncing dirty shared-buffer pages to disk at checkpoint time; if it is long and performance dips during checkpoints, consider block devices with higher IOPS. "
"    A high buffers_backend_fsync means shared_buffers should be increased or bgwriter_delay reduced. "

2.9 Database Bloat Analysis

2.9.1 Table Bloat Check


SELECT
  current_database() AS db, schemaname, tablename, reltuples::bigint AS tups, relpages::bigint AS pages, otta,
  ROUND(CASE WHEN otta=0 OR sml.relpages=0 OR sml.relpages=otta THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS tbloat,
  CASE WHEN relpages < otta THEN 0 ELSE relpages::bigint - otta END AS wastedpages,
  CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes,
  CASE WHEN relpages < otta THEN $$0 bytes$$::text ELSE (bs*(relpages-otta))::bigint || $$ bytes$$ END AS wastedsize,
  iname, ituples::bigint AS itups, ipages::bigint AS ipages, iotta,
  ROUND(CASE WHEN iotta=0 OR ipages=0 OR ipages=iotta THEN 0.0 ELSE ipages/iotta::numeric END,1) AS ibloat,
  CASE WHEN ipages < iotta THEN 0 ELSE ipages::bigint - iotta END AS wastedipages,
  CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes,
  CASE WHEN ipages < iotta THEN $$0 bytes$$ ELSE (bs*(ipages-iotta))::bigint || $$ bytes$$ END AS wastedisize,
  CASE WHEN relpages < otta THEN
    CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta::bigint) END
    ELSE CASE WHEN ipages < iotta THEN bs*(relpages-otta::bigint)
      ELSE bs*(relpages-otta::bigint + ipages-iotta::bigint) END
  END AS totalwastedbytes
FROM (
  SELECT
    nn.nspname AS schemaname,
    cc.relname AS tablename,
    COALESCE(cc.reltuples,0) AS reltuples,
    COALESCE(cc.relpages,0) AS relpages,
    COALESCE(bs,0) AS bs,
    COALESCE(CEIL((cc.reltuples*((datahdr+ma-
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)),0) AS otta,
    COALESCE(c2.relname,$$?$$) AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols
  FROM
     pg_class cc
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <> $$information_schema$$
  LEFT JOIN
  (
    SELECT
      ma,bs,foo.nspname,foo.relname,
      (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,
      (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
    FROM (
      SELECT
        ns.nspname, tbl.relname, hdr, ma, bs,
        SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS datawidth,
        MAX(coalesce(null_frac,0)) AS maxfracsum,
        hdr+(
          SELECT 1+count(*)/8
          FROM pg_stats s2
          WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND s2.tablename = tbl.relname
        ) AS nullhdr
      FROM pg_attribute att 
      JOIN pg_class tbl ON att.attrelid = tbl.oid
      JOIN pg_namespace ns ON ns.oid = tbl.relnamespace 
      LEFT JOIN pg_stats s ON s.schemaname=ns.nspname
      AND s.tablename = tbl.relname
      AND s.inherited=false
      AND s.attname=att.attname,
      (
        SELECT
          (SELECT current_setting($$block_size$$)::numeric) AS bs,
            CASE WHEN SUBSTRING(SPLIT_PART(v, $$ $$, 2) FROM $$#"[0-9]+.[0-9]+#"%$$ for $$#$$)
              IN ($$8.0$$,$$8.1$$,$$8.2$$) THEN 27 ELSE 23 END AS hdr,
          CASE WHEN v ~ $$mingw32$$ OR v ~ $$64-bit$$ THEN 8 ELSE 4 END AS ma
        FROM (SELECT version() AS v) AS foo
      ) AS constants
      WHERE att.attnum > 0 AND tbl.relkind=$$r$$
      GROUP BY 1,2,3,4,5
    ) AS foo
  ) AS rs
  ON cc.relname = rs.relname AND nn.nspname = rs.nspname
  LEFT JOIN pg_index i ON indrelid = cc.oid
  LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
) AS sml order by wastedbytes desc limit 5;

"Suggestions: "
"    Based on the wasted bytes, set a suitable autovacuum_vacuum_scale_factor; for large tables with frequent updates or deletes+inserts, a smaller autovacuum_vacuum_scale_factor limits wasted space. "
"    Also keep autovacuum on and, according to the server's memory and CPU cores, set autovacuum_work_mem / autovacuum_max_workers / maintenance_work_mem large enough and autovacuum_naptime small enough. "
"    Also check whether large databases are logically dumped with pg_dump, and whether long SQL or long transactions are common; all of these can cause bloat. "
"    pg_reorg (now pg_repack) or VACUUM FULL can reclaim the bloated space. "
"    Column legend: otta = estimated pages the table actually needs; iotta = estimated pages the index actually needs; "
"    bs = database block size; "
"    tbloat/ibloat = table/index bloat factor; wastedpages/wastedipages = wasted table/index pages; "
"    wastedbytes/wastedibytes = wasted table/index bytes. "
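pg_repack rebuilds a bloated table online, unlike VACUUM FULL which takes an exclusive lock. A hypothetical invocation (connection parameters and table name are placeholders; the pg_repack extension must first be created in the target database):

```shell
# Prerequisite, run once in the target database:
#   CREATE EXTENSION pg_repack;
pg_repack -h 127.0.0.1 -p 54321 -d test -t public.big_table
```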

2.9.2 Index Bloat Check


SELECT
  current_database() AS db, schemaname, tablename, reltuples::bigint AS tups, relpages::bigint AS pages, otta,
  ROUND(CASE WHEN otta=0 OR sml.relpages=0 OR sml.relpages=otta THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS tbloat,
  CASE WHEN relpages < otta THEN 0 ELSE relpages::bigint - otta END AS wastedpages,
  CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes,
  CASE WHEN relpages < otta THEN $$0 bytes$$::text ELSE (bs*(relpages-otta))::bigint || $$ bytes$$ END AS wastedsize,
  iname, ituples::bigint AS itups, ipages::bigint AS ipages, iotta,
  ROUND(CASE WHEN iotta=0 OR ipages=0 OR ipages=iotta THEN 0.0 ELSE ipages/iotta::numeric END,1) AS ibloat,
  CASE WHEN ipages < iotta THEN 0 ELSE ipages::bigint - iotta END AS wastedipages,
  CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes,
  CASE WHEN ipages < iotta THEN $$0 bytes$$ ELSE (bs*(ipages-iotta))::bigint || $$ bytes$$ END AS wastedisize,
  CASE WHEN relpages < otta THEN
    CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta::bigint) END
    ELSE CASE WHEN ipages < iotta THEN bs*(relpages-otta::bigint)
      ELSE bs*(relpages-otta::bigint + ipages-iotta::bigint) END
  END AS totalwastedbytes
FROM (
  SELECT
    nn.nspname AS schemaname,
    cc.relname AS tablename,
    COALESCE(cc.reltuples,0) AS reltuples,
    COALESCE(cc.relpages,0) AS relpages,
    COALESCE(bs,0) AS bs,
    COALESCE(CEIL((cc.reltuples*((datahdr+ma-
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)),0) AS otta,
    COALESCE(c2.relname,$$?$$) AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols
  FROM
     pg_class cc
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <> $$information_schema$$
  LEFT JOIN
  (
    SELECT
      ma,bs,foo.nspname,foo.relname,
      (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,
      (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
    FROM (
      SELECT
        ns.nspname, tbl.relname, hdr, ma, bs,
        SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS datawidth,
        MAX(coalesce(null_frac,0)) AS maxfracsum,
        hdr+(
          SELECT 1+count(*)/8
          FROM pg_stats s2
          WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND s2.tablename = tbl.relname
        ) AS nullhdr
      FROM pg_attribute att 
      JOIN pg_class tbl ON att.attrelid = tbl.oid
      JOIN pg_namespace ns ON ns.oid = tbl.relnamespace 
      LEFT JOIN pg_stats s ON s.schemaname=ns.nspname
      AND s.tablename = tbl.relname
      AND s.inherited=false
      AND s.attname=att.attname,
      (
        SELECT
          (SELECT current_setting($$block_size$$)::numeric) AS bs,
            CASE WHEN SUBSTRING(SPLIT_PART(v, $$ $$, 2) FROM $$#"[0-9]+.[0-9]+#"%$$ for $$#$$)
              IN ($$8.0$$,$$8.1$$,$$8.2$$) THEN 27 ELSE 23 END AS hdr,
          CASE WHEN v ~ $$mingw32$$ OR v ~ $$64-bit$$ THEN 8 ELSE 4 END AS ma
        FROM (SELECT version() AS v) AS foo
      ) AS constants
      WHERE att.attnum > 0 AND tbl.relkind=$$r$$
      GROUP BY 1,2,3,4,5
    ) AS foo
  ) AS rs
  ON cc.relname = rs.relname AND nn.nspname = rs.nspname
  LEFT JOIN pg_index i ON indrelid = cc.oid
  LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
) AS sml order by wastedibytes desc limit 5;

"Suggestions: "
"    Severe index bloat hurts performance; rebuild the index with CREATE INDEX CONCURRENTLY ... . "
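One rebuild pattern using CREATE INDEX CONCURRENTLY (index, table, and column names are hypothetical):

```sql
CREATE INDEX CONCURRENTLY idx_orders_ctime_new ON orders (ctime);
DROP INDEX CONCURRENTLY idx_orders_ctime;   -- drop the bloated index
ALTER INDEX idx_orders_ctime_new RENAME TO idx_orders_ctime;
```

On PostgreSQL 12+, `REINDEX INDEX CONCURRENTLY idx_orders_ctime;` does all three steps in one command.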

2.9.3 Dead Tuples

select current_database(),schemaname,relname,n_dead_tup 
from pg_stat_all_tables 
where n_live_tup>0 and n_dead_tup/n_live_tup>0.2 and schemaname not in ($$pg_toast$$,$$pg_catalog$$) 
order by n_dead_tup 
desc limit 5;
"Suggestions: "
"    Excessive dead tuples usually mean garbage cannot be reclaimed, the vacuum workers are busy or not waking up in time, autovacuum is off, or a burst of garbage was produced in a short time. "
"    Either wait for autovacuum to handle it, or run VACUUM on the table manually. "

2.9.4 Long Transactions and Prepared (2PC) Transactions

select datname,usename,query,xact_start,now()-xact_start xact_duration,query_start,now()-query_start query_duration,state from pg_stat_activity where state<>$$idle$$ and (backend_xid is not null or backend_xmin is not null) and now()-xact_start > interval $$30 min$$ order by xact_start;

select gid,prepared,now()-prepared prepared_duration,owner,database from pg_prepared_xacts where now()-prepared > interval $$30 min$$ order by prepared;

"Suggestions: "
"    Garbage produced during a long transaction cannot be reclaimed; avoid running long SQL in the database, or schedule it off the DML peak. Always finish 2PC transactions promptly, otherwise the database may bloat. "

2.10 XLOG and Streaming Replication Analysis

"--Is archiving / autovacuum enabled: "
select name,setting from pg_settings where name in ($$archive_mode$$,$$autovacuum$$,$$archive_command$$);
"Suggestions: "
"    Enable autovacuum and WAL archiving. (Note: on PostgreSQL 10+, pg_xlog was renamed to pg_wal, and the functions below became pg_walfile_name(), pg_current_wal_lsn() and pg_wal_lsn_diff().) "
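A postgresql.conf sketch for enabling archiving (the archive directory is a placeholder; the `test ! -f` guard refuses to overwrite an existing archive file, and archive_command must return 0 only when the file was safely archived):

```
archive_mode = on
archive_command = 'test ! -f /data/archive/%f && cp %p /data/archive/%f'
```

Here %p expands to the path of the WAL segment to archive and %f to its file name.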

"--Archiver statistics: "
select pg_xlogfile_name(pg_current_xlog_location()) now_xlog, * from pg_stat_archiver;
"Suggestions: "
"    If the current XLOG file is many files ahead of the last failed archive, investigate the archiving failure quickly, otherwise the pg_xlog directory may fill up the disk. "

"--Streaming replication statistics: "
select pg_xlog_location_diff(pg_current_xlog_location(),flush_location), * from pg_stat_replication;
"Suggestions: "
"    Watch the replication lag; if it is large, check network bandwidth, local xlog read performance, and remote xlog write performance. "

"--Replication slots: "
select pg_xlog_location_diff(pg_current_xlog_location(),restart_lsn), * from pg_replication_slots;
"Suggestions: "
"    If restart_lsn lags far behind the current XLOG position, check whether the slot's consumer is alive and receiving XLOG normally; a slot whose data is not consumed for a long time can fill up the pg_xlog directory. "

3. Operating System Information

"--Hostname: "
hostname -s
"--Ethernet links: "
ip link show
"--IP addresses: "
ip addr show
"--Routes: "
ip route show
"--Kernel: "
uname -a
"--Memory (MB): "
free -m
"--CPU: "
lscpu
"--Block devices: "
lsblk
"--Topology: "
lstopo-no-graphics
"--Process tree: "
pstree -a -A -c -l -n -p -u -U -Z
"--OS configuration files, static settings: "
"--/etc/sysctl.conf "
grep "^[a-z]" /etc/sysctl.conf
"--/etc/security/limits.conf "
grep -v "^#" /etc/security/limits.conf|grep -v "^$"
"--/etc/security/limits.d/*.conf "
for dir in `ls /etc/security/limits.d`; do echo "/etc/security/limits.d/$dir : "; grep -v "^#" /etc/security/limits.d/$dir|grep -v "^$"; done 
"--/etc/sysconfig/iptables "
cat /etc/sysconfig/iptables
"--/etc/fstab "
cat /etc/fstab
"--/etc/rc.local "
cat /etc/rc.local
"--/etc/selinux/config "
cat /etc/selinux/config
"--/boot/grub/grub.conf "
cat /boot/grub/grub.conf
"--/var/spool/cron per-user cron settings "
for dir in `ls /var/spool/cron`; do echo "/var/spool/cron/$dir : "; cat /var/spool/cron/$dir; done 
"--chkconfig --list "
chkconfig --list
"--iptables -L -v -n -t filter runtime settings: "
iptables -L -v -n -t filter
"--iptables -L -v -n -t nat runtime settings: "
iptables -L -v -n -t nat
"--iptables -L -v -n -t mangle runtime settings: "
iptables -L -v -n -t mangle
"--iptables -L -v -n -t raw runtime settings: "
iptables -L -v -n -t raw
"--sysctl -a runtime settings: "
sysctl -a
"--mount runtime settings: "
mount -l
"--selinux runtime settings: "
getsebool
sestatus
"--Transparent Huge Pages (THP) should be disabled: "
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
cat /sys/kernel/mm/redhat_transparent_hugepage/defrag
"--Disk SMART info (requires root): "
smartctl --scan|awk -F "#" '{print $1}' | while read i; do echo -e "\n\nDEVICE $i"; smartctl -a $i; done
"--/var/log/boot.log "
cat /var/log/boot.log
"--/var/log/cron (requires root) "
cat /var/log/cron
"--/var/log/dmesg "
cat /var/log/dmesg
"--/var/log/messages (requires root) "
tail -n 500 /var/log/messages
"--/var/log/secure (requires root) "
cat /var/log/secure
"--/var/log/wtmp "
who -a /var/log/wtmp
