The Postgres 2015 China User Conference will be held November 20-21 at the Liting Huayuan Hotel in Beijing. The speaker lineup is strong: China's top PostgreSQL experts will all be on site, joined by invited database experts from Europe, Russia, Japan, the United States, and elsewhere:
- SUZUKI Koichi, founder of the Postgres-XC project
- Mason Sharp, founder of the Postgres-XL project
- Tatsuo Ishii, author of pgpool
- Kaigai Kohei, author of PG-Strom
- Yao Yandong, Greenplum R&D director
- Zhou Zhengzhong (digoal), one of the founders of the PostgreSQL China User Group
- Wang Yang, database department manager at Ping An Technology
- ...
[Update]
I have since used pg_upgrade to smoothly upgrade an 8TB production database (roughly 100,000 objects: tables, indexes, types, functions, foreign objects, and so on) from 9.3 to 9.4. The upgrade itself was fast (about 2 minutes). Because the database is large, the post-upgrade analyze phase takes a long time, but you can analyze the most frequently used tables first and then confidently put the database back into service.
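The "analyze hot tables first" idea can be scripted; a minimal sketch, where the table list is a hypothetical placeholder and the port/user follow this article's setup:

```shell
# Sketch: print the vacuumdb commands to run, hottest tables first.
# "userinfo" stands in for your most-queried tables (hypothetical list).
PGBIN=/opt/pgsql9.4.0/bin
HOT_TABLES="userinfo"
for t in $HOT_TABLES; do
  echo "$PGBIN/vacuumdb -p 1922 -U postgres -d digoal --analyze-only -t $t"
done
# then collect statistics for everything else:
echo "$PGBIN/vacuumdb -p 1922 -U postgres --all --analyze-only"
```

Run the printed commands (or drop the echo) once the new server is up; until a table is analyzed, the planner has no statistics for it.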
PostgreSQL 9.4 was officially released yesterday (2014-12-18). To help everyone enjoy 9.4's powerful new features quickly, here is a fast and reliable walkthrough of upgrading to 9.4 with zfs and pg_upgrade. I hope it helps.
A reminder:
Before upgrading to 9.4 for real, do your homework: read the release notes at least once, especially the compatibility section. If an application relies on syntax or an extension that 9.4 no longer supports, resolve that before upgrading. (In the past, a version upgrade changed the default bytea text representation and broke applications.)
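For context, the bytea incident mentioned above traces back to PostgreSQL 9.0 changing the default bytea text output from 'escape' to 'hex'; clients that parse bytea text themselves broke. If you hit that class of problem, the old representation can be restored as a stopgap (a workaround sketch, not a recommendation to stay on the old format):

```conf
# postgresql.conf -- revert bytea text output to the pre-9.0 'escape' format
bytea_output = 'escape'
```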
pg_upgrade supports cross-major-version upgrades from 8.3.x onward. With link mode it avoids copying the data files, which speeds the upgrade up enormously.
This article demonstrates using pg_upgrade to move a database from 9.3.5 to the new 9.4.
ZFS snapshots preserve the old data files and software. If the upgrade fails, rolling back is very simple: revert to the ZFS snapshot, or work from a clone of it.
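A sketch of the snapshot-revert path (the clone-based alternative is demonstrated at the end of this article). Note that zfs rollback is destructive: it discards everything written after the snapshot, including the upgraded cluster, so run it only with both clusters stopped. Dataset and snapshot names follow this article's layout:

```shell
# Guard so the sketch degrades gracefully on hosts without zfs installed.
if ! command -v zfs >/dev/null 2>&1; then
  echo "zfs not available"
else
  for ds in pg_root pg_xlog pg_arch tbs1 tbs2; do
    zfs rollback zp1/$ds@pg9.3.5   # revert each dataset to its pre-upgrade state
  done
fi
```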
Upgrade steps in brief:
If the host is already on ZFS:
- stop the database
- take snapshots
- upgrade with pg_upgrade
If the host is not on ZFS:
- build a ZFS-based host
- create a standby on it
- switch roles so the ZFS host becomes the primary
- then, on the new primary: stop the database, take snapshots, upgrade with pg_upgrade
How do you turn an old-version standby into a 9.4 standby?
- pg_start_backup
- rsync the data files
- pg_stop_backup
- create recovery.conf and carry on
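For the last step of the standby outline, a minimal recovery.conf sketch; the primary's address and port below are hypothetical placeholders, to be replaced with your new 9.4 primary:

```conf
# recovery.conf on the rebuilt 9.4 standby (host/port are placeholders)
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.10 port=1921 user=postgres'
```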
Detailed steps for upgrading to 9.4 with ZFS and pg_upgrade:
CentOS 7 x64 is used as the example.
Test environment setup
Install zfs
Find the latest epel7 RPM under http://download.fedoraproject.org/pub/epel and add it as a YUM repository.
For example, the current versions at the time of writing:
[root@localhost ~]# yum localinstall --nogpgcheck http://ftp.cuhk.edu.hk/pub/linux/fedora-epel/7/x86_64/e/epel-release-7-5.noarch.rpm
[root@localhost ~]# yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
[root@localhost ~]# uname -r
3.10.0-123.el7.x86_64
[root@localhost ~]# yum install kernel-devel-3.10.0-123.el7 zfs
With ZFS installed, create a ZPOOL. We use five files to simulate five disks.
[root@localhost disks]# dd if=/dev/zero of=./disk1 bs=8192k count=1024 oflag=direct
[root@localhost disks]# dd if=/dev/zero of=./disk2 bs=8192k count=1024 oflag=direct
[root@localhost disks]# dd if=/dev/zero of=./disk3 bs=8192k count=1024 oflag=direct
[root@localhost disks]# dd if=/dev/zero of=./disk4 bs=8192k count=1024 oflag=direct
[root@localhost disks]# dd if=/dev/zero of=./disk5 bs=8192k count=1024 oflag=direct
Create the zpool:
[root@localhost disks]# zpool create -o ashift=12 zp1 raidz /data01/disks/disk1 /data01/disks/disk2 /data01/disks/disk3 /data01/disks/disk4 /data01/disks/disk5
[root@localhost disks]# zpool status
  pool: zp1
 state: ONLINE
  scan: none requested
config:
        NAME                     STATE  READ WRITE CKSUM
        zp1                      ONLINE    0     0     0
          raidz1-0               ONLINE    0     0     0
            /data01/disks/disk1  ONLINE    0     0     0
            /data01/disks/disk2  ONLINE    0     0     0
            /data01/disks/disk3  ONLINE    0     0     0
            /data01/disks/disk4  ONLINE    0     0     0
            /data01/disks/disk5  ONLINE    0     0     0
Set the default zfs options:
[root@localhost disks]# zfs set atime=off zp1
[root@localhost disks]# zfs set compression=lz4 zp1
[root@localhost disks]# zfs set canmount=off zp1
Next, plan the database directory layout. Assume the data is split across five filesystems:
$PGDATA
pg_xlog
pg_arch
tbs1
tbs2
Create the corresponding zfs filesystems:
[root@localhost disks]# zfs create -o mountpoint=/pgdata01 zp1/pg_root
[root@localhost disks]# zfs create -o mountpoint=/pgdata02 zp1/pg_xlog
[root@localhost disks]# zfs create -o mountpoint=/pgdata03 zp1/pg_arch
[root@localhost disks]# zfs create -o mountpoint=/pgdata04 zp1/tbs1
[root@localhost disks]# zfs create -o mountpoint=/pgdata05 zp1/tbs2
[root@localhost disks]# df -h
zp1/pg_root  32G  256K  32G  1%  /pgdata01
zp1/pg_xlog  32G  256K  32G  1%  /pgdata02
zp1/pg_arch  32G  256K  32G  1%  /pgdata03
zp1/tbs1     32G  256K  32G  1%  /pgdata04
zp1/tbs2     32G  256K  32G  1%  /pgdata05
Create the data directories:
[root@localhost ~]# mkdir /pgdata01/pg_root
[root@localhost ~]# mkdir /pgdata02/pg_xlog
[root@localhost ~]# mkdir /pgdata03/pg_arch
[root@localhost ~]# mkdir /pgdata04/tbs1
[root@localhost ~]# mkdir /pgdata05/tbs2
[root@localhost ~]# chown -R postgres:postgres /pgdata0*/
Next, install PostgreSQL 9.3.5, initialize the database, and generate test data.
[root@localhost soft_bak]# tar -jxvf postgresql-9.3.5.tar.bz2
[root@localhost soft_bak]# cd postgresql-9.3.5
Note
The build options must match when you later compile 9.4. For example, we use a non-default block size here, so 9.4 must be configured with the same block size.
[root@localhost soft_bak]# yum -y install glib2 lrzsz sysstat e4fsprogs xfsprogs ntp readline-devel zlib zlib-devel openssl openssl-devel pam-devel libxml2-devel libxslt-devel python-devel tcl-devel gcc make smartmontools flex bison perl perl-devel perl-ExtUtils* OpenIPMI-tools openldap openldap-devel
[root@localhost postgresql-9.3.5]# ./configure --prefix=/opt/pgsql9.3.5 --with-pgport=1921 --with-perl --with-tcl --with-python --with-openssl --with-pam --without-ldap --with-libxml --with-libxslt --enable-thread-safety --with-blocksize=32 --with-wal-blocksize=32 && gmake world && gmake install-world
[root@localhost postgresql-9.3.5]# ln -s /opt/pgsql9.3.5 /opt/pgsql
[root@localhost postgresql-9.3.5]# vi /etc/ld.so.conf
/opt/pgsql/lib
[root@localhost postgresql-9.3.5]# ldconfig
[root@localhost postgresql-9.3.5]# ldconfig -p|grep /opt/pgsql
libpqwalreceiver.so (libc6,x86-64) => /opt/pgsql/lib/libpqwalreceiver.so
libpq.so.5 (libc6,x86-64) => /opt/pgsql/lib/libpq.so.5
libpq.so (libc6,x86-64) => /opt/pgsql/lib/libpq.so
libpgtypes.so.3 (libc6,x86-64) => /opt/pgsql/lib/libpgtypes.so.3
libpgtypes.so (libc6,x86-64) => /opt/pgsql/lib/libpgtypes.so
libecpg_compat.so.3 (libc6,x86-64) => /opt/pgsql/lib/libecpg_compat.so.3
libecpg_compat.so (libc6,x86-64) => /opt/pgsql/lib/libecpg_compat.so
libecpg.so.6 (libc6,x86-64) => /opt/pgsql/lib/libecpg.so.6
libecpg.so (libc6,x86-64) => /opt/pgsql/lib/libecpg.so
[root@localhost postgresql-9.3.5]# vi /etc/profile
export PATH=/opt/pgsql/bin:$PATH
[root@localhost postgresql-9.3.5]# . /etc/profile
[root@localhost postgresql-9.3.5]# which psql
/opt/pgsql/bin/psql
[root@localhost postgresql-9.3.5]# which pg_config
/opt/pgsql/bin/pg_config
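Before running pg_upgrade later, one quick way to double-check the "same build options" requirement is to compare what each build reports. A sketch, using this article's install paths (stderr is silenced so the check degrades gracefully if a binary is missing):

```shell
# Compare the configure options recorded in each build.
old=$(/opt/pgsql9.3.5/bin/pg_config --configure 2>/dev/null)
new=$(/opt/pgsql9.4.0/bin/pg_config --configure 2>/dev/null)
# Only the --prefix should differ; flags like --with-blocksize must match.
if [ -n "$old" ] && [ "$old" = "$new" ]; then
  echo "configure options match"
else
  echo "configure options differ or could not be read -- verify before pg_upgrade"
fi
```

(The two prefixes will of course differ; the point is to eyeball the rest of the flags, especially the block sizes.)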
Now install an external extension, as a reminder that when you upgrade with pg_upgrade, anything built against the old server must also be built into the new version (use the same extension version).
If an external extension does not support PostgreSQL 9.4, drop it from the 9.3 database first (including every dependent type, function, and other object it created).
I use pldebugger as the example:
http://git.postgresql.org/gitweb/?p=pldebugger.git;a=summary
[root@localhost soft_bak]# tar -zxvf pldebugger-85d7b3b.tar.gz
[root@localhost soft_bak]# mv pldebugger-85d7b3b postgresql-9.3.5/contrib/
[root@localhost soft_bak]# cd postgresql-9.3.5/contrib/pldebugger-85d7b3b/
[root@localhost pldebugger-85d7b3b]# which pg_config
/opt/pgsql/bin/pg_config
[root@localhost pldebugger-85d7b3b]# gmake clean
[root@localhost pldebugger-85d7b3b]# gmake
[root@localhost pldebugger-85d7b3b]# gmake install
Create the OS user and environment:
[root@localhost pldebugger-85d7b3b]# useradd postgres
# su - postgres
$ vi .bash_profile
export PS1="$USER@`/bin/hostname -s`-> "
export PGPORT=1921
export PGDATA=/pgdata01/pg_root
export LANG=en_US.utf8
export PGHOME=/opt/pgsql
export LD_LIBRARY_PATH=$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
export DATE=`date +"%Y%m%d%H%M"`
export PATH=$PGHOME/bin:$PATH:.
export MANPATH=$PGHOME/share/man:$MANPATH
export PGUSER=postgres
export PGHOST=$PGDATA
export PGDATABASE=postgres
alias rm='rm -i'
alias ll='ls -lh'
$ . ~/.bash_profile
Fix ownership and permissions:
[root@localhost pldebugger-85d7b3b]# chown -R postgres:postgres /pgdata0*/*
[root@localhost pldebugger-85d7b3b]# chmod -R 700 /pgdata0*/*
Initialize the database:
postgres@localhost-> initdb -D $PGDATA -U postgres -E UTF8 --locale=C -W -X /pgdata02/pg_xlog
Edit the configuration files and enable archiving:
vi pg_hba.conf
host all all 0.0.0.0/0 md5
vi postgresql.conf
listen_addresses = '0.0.0.0'
port = 1921
max_connections = 100
superuser_reserved_connections = 3
unix_socket_directories = '.'
unix_socket_permissions = 0700
tcp_keepalives_idle = 60
tcp_keepalives_interval = 10
tcp_keepalives_count = 10
shared_buffers = 512MB
maintenance_work_mem = 512MB
vacuum_cost_delay = 10
vacuum_cost_limit = 10000
bgwriter_delay = 10ms
wal_level = hot_standby
synchronous_commit = off
wal_buffers = 16384kB
wal_writer_delay = 10ms
checkpoint_segments = 32
archive_mode = on
archive_command = 'DIR="/pgdata03/pg_arch/`date +%F`";test -d $DIR || mkdir -p $DIR; cp %p $DIR/%f'
archive_timeout = 600
effective_cache_size = 4096MB
log_destination = 'csvlog'
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_file_mode = 0600
log_truncate_on_rotation = on
log_checkpoints = on
log_connections = on
log_disconnections = on
log_error_verbosity = verbose
log_lock_waits = on
log_statement = 'ddl'
log_timezone = 'PRC'
autovacuum = on
log_autovacuum_min_duration = 0
datestyle = 'iso, mdy'
timezone = 'PRC'
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
default_text_search_config = 'pg_catalog.english'
Start the database:
postgres@localhost-> pg_ctl start
Create a test role:
postgres=# create role digoal login encrypted password 'digoal';
CREATE ROLE
Create the tablespaces and database:
postgres=# create tablespace tbs1 location '/pgdata04/tbs1';
CREATE TABLESPACE
postgres=# create tablespace tbs2 location '/pgdata05/tbs2';
CREATE TABLESPACE
postgres=# create database digoal template template0 encoding 'UTF8' tablespace tbs1;
CREATE DATABASE
postgres=# grant all on database digoal to digoal;
GRANT
postgres=# grant all on tablespace tbs1 to digoal;
GRANT
postgres=# grant all on tablespace tbs2 to digoal;
GRANT
postgres=# \c digoal digoal
You are now connected to database "digoal" as user "digoal".
digoal=> create schema digoal;
CREATE SCHEMA
Create an extension, which we will later use to exercise the 9.3-to-9.4 module upgrade.
postgres=# \c digoal postgres
You are now connected to database "digoal" as user "postgres".
digoal=# create extension pldbgapi;
CREATE EXTENSION
The extension creates several composite types and functions:
digoal=# \dT
List of data types
 Schema |    Name    | Description
--------+------------+-------------
 public | breakpoint |
 public | frame      |
 public | proxyinfo  |
 public | targetinfo |
 public | var        |
(5 rows)
digoal=# \df
List of functions
 Schema | Name | Result data type | Argument data types | Type
--------+------+------------------+---------------------+------
 public | pldbg_abort_target | SETOF boolean | session integer | normal
 public | pldbg_attach_to_port | integer | portnumber integer | normal
 public | pldbg_continue | breakpoint | session integer | normal
 public | pldbg_create_listener | integer | | normal
 public | pldbg_deposit_value | boolean | session integer, varname text, linenumber integer, value text | normal
 public | pldbg_drop_breakpoint | boolean | session integer, func oid, linenumber integer | normal
 public | pldbg_get_breakpoints | SETOF breakpoint | session integer | normal
 public | pldbg_get_proxy_info | proxyinfo | | normal
 public | pldbg_get_source | text | session integer, func oid | normal
 public | pldbg_get_stack | SETOF frame | session integer | normal
 public | pldbg_get_target_info | targetinfo | signature text, targettype "char" | normal
 public | pldbg_get_variables | SETOF var | session integer | normal
 public | pldbg_oid_debug | integer | functionoid oid | normal
 public | pldbg_select_frame | breakpoint | session integer, frame integer | normal
 public | pldbg_set_breakpoint | boolean | session integer, func oid, linenumber integer | normal
 public | pldbg_set_global_breakpoint | boolean | session integer, func oid, linenumber integer, targetpid integer | normal
 public | pldbg_step_into | breakpoint | session integer | normal
 public | pldbg_step_over | breakpoint | session integer | normal
 public | pldbg_wait_for_breakpoint | breakpoint | session integer | normal
 public | pldbg_wait_for_target | integer | session integer | normal
 public | plpgsql_oid_debug | integer | functionoid oid | normal
(21 rows)
digoal=# \d breakpoint
Composite type "public.breakpoint"
   Column   |  Type   | Modifiers
------------+---------+-----------
 func       | oid     |
 linenumber | integer |
 targetname | text    |
Create the test table and function, placed in tbs1 and tbs2.
digoal=# \c digoal digoal
You are now connected to database "digoal" as user "digoal".
digoal=> create table userinfo (id int primary key, info text, crt_time timestamp);
CREATE TABLE
digoal=> \d userinfo
Table "digoal.userinfo"
  Column  |            Type             | Modifiers
----------+-----------------------------+-----------
 id       | integer                     | not null
 info     | text                        |
 crt_time | timestamp without time zone |
Indexes:
    "userinfo_pkey" PRIMARY KEY, btree (id)
digoal=> alter index userinfo_pkey set tablespace tbs2;
ALTER INDEX
digoal=> create or replace function f_digoal(i_id int) returns void as $$
declare
begin
  update userinfo set info=$_$Hello,I'm digoal.$_$||md5(random()::text), crt_time=now() where id=i_id;
  if not found then
    insert into userinfo(id,info,crt_time) values(i_id, $_$Hello,I'm digoal.$_$||md5(random()::text), now());
  end if;
  return;
exception when others then
  return;
end;
$$ language plpgsql strict volatile;
CREATE FUNCTION
digoal=> select f_digoal(1);
 f_digoal
----------

(1 row)
digoal=> select * from userinfo ;
 id |                       info                        |          crt_time
----+---------------------------------------------------+----------------------------
  1 | Hello,I'm digoal.607acd6f0bfe3c48eecde00f1b98ad85 | 2014-12-19 18:48:03.398352
(1 row)
digoal=> select f_digoal(1);
 f_digoal
----------

(1 row)
digoal=> select * from userinfo ;
 id |                       info                        |          crt_time
----+---------------------------------------------------+----------------------------
  1 | Hello,I'm digoal.debe361485303d3bac72ea3d9a95aa42 | 2014-12-19 18:48:47.255641
(1 row)
Generate test data:
digoal=> insert into userinfo select generate_series(2,10000000),'test',clock_timestamp();
INSERT 0 9999999
Install PostgreSQL 9.4. Again, keep the configure options identical, and remember both in-tree and external modules (gmake world && gmake install-world installs all in-tree modules).
[root@localhost soft_bak]# wget https://ftp.postgresql.org/pub/source/v9.4.0/postgresql-9.4.0.tar.bz2
[root@localhost soft_bak]# tar -jxvf postgresql-9.4.0.tar.bz2
[root@localhost soft_bak]# cd postgresql-9.4.0
[root@localhost postgresql-9.4.0]# ./configure --prefix=/opt/pgsql9.4.0 --with-pgport=1921 --with-perl --with-tcl --with-python --with-openssl --with-pam --without-ldap --with-libxml --with-libxslt --enable-thread-safety --with-blocksize=32 --with-wal-blocksize=32 && gmake world && gmake install-world
Verify that the install includes pg_upgrade and its support library:
[root@localhost ~]# cd /opt/pgsql9.4.0/lib/
[root@localhost lib]# ll|grep upgr
-rwxr-xr-x 1 root root 14352 Dec 19 19:12 pg_upgrade_support.so
[root@localhost lib]# ll /opt/pgsql9.4.0/bin/pg_upgrade
-rwxr-xr-x 1 root root 116368 Dec 19 19:12 /opt/pgsql9.4.0/bin/pg_upgrade
Install the pldebugger module; this revision is compatible with 9.4.
[root@localhost postgresql-9.4.0]# cd ..
[root@localhost soft_bak]# tar -zxvf pldebugger-85d7b3b.tar.gz
[root@localhost soft_bak]# mv pldebugger-85d7b3b postgresql-9.4.0/contrib/
[root@localhost soft_bak]# cd postgresql-9.4.0/contrib/pldebugger-85d7b3b/
[root@localhost pldebugger-85d7b3b]# export PATH=/opt/pgsql9.4.0/bin:$PATH
[root@localhost pldebugger-85d7b3b]# which pg_config
/opt/pgsql9.4.0/bin/pg_config
[root@localhost pldebugger-85d7b3b]# gmake clean
[root@localhost pldebugger-85d7b3b]# gmake
[root@localhost pldebugger-85d7b3b]# gmake install
To use hard links on $PGDATA and speed the upgrade up, the new cluster's $PGDATA must be on the same filesystem as the old cluster's $PGDATA.
So we use /pgdata01/pg_root_9.4:
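Since link mode only works within a single filesystem, it is worth verifying the device numbers before committing to pg_upgrade -k. A small sketch using GNU stat, with this article's paths:

```shell
# Two paths are on the same filesystem iff they report the same device number.
same_fs() {
  [ -e "$1" ] && [ -e "$2" ] && \
  [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}
if same_fs /pgdata01/pg_root /pgdata01/pg_root_9.4; then
  echo "same filesystem: pg_upgrade -k (link mode) will work"
else
  echo "different filesystems (or missing path): use copy mode"
fi
```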
[root@localhost lib]# mkdir /pgdata01/pg_root_9.4
[root@localhost lib]# chown -R postgres:postgres /pgdata01/pg_root_9.4
[root@localhost lib]# chmod 700 /pgdata01/pg_root_9.4
Create the new xlog and archive directories. (If you use a custom pg_log location, create that too; this example keeps $PGDATA/pg_log, so nothing extra is needed.)
[root@localhost lib]# mkdir /pgdata02/pg_xlog_9.4
[root@localhost lib]# chown -R postgres:postgres /pgdata02/pg_xlog_9.4
[root@localhost lib]# chmod 700 /pgdata02/pg_xlog_9.4
[root@localhost lib]# mkdir /pgdata03/pg_arch_9.4
[root@localhost lib]# chown -R postgres:postgres /pgdata03/pg_arch_9.4
[root@localhost lib]# chmod 700 /pgdata03/pg_arch_9.4
Initialize the 9.4 cluster. Apart from the xlog and data directories, the initdb options must match 9.3 (including the superuser name):
[root@localhost lib]# su - postgres
Last login: Fri Dec 19 19:23:00 CST 2014 on pts/3
postgres@localhost-> /opt/pgsql9.4.0/bin/initdb -D /pgdata01/pg_root_9.4 -X /pgdata02/pg_xlog_9.4 -E UTF8 --locale=C -U postgres -W
Configure the 9.4 cluster
Make pg_hba.conf match the old 9.3 instance's.
Also, because the upgrade connects to both clusters repeatedly, switch both to local trust authentication.
postgres@localhost-> vi /pgdata01/pg_root/pg_hba.conf
postgres@localhost-> vi /pgdata01/pg_root_9.4/pg_hba.conf
Both need to contain:
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
Edit the 9.4 instance's postgresql.conf, making sure to use a different listen port. (PostgreSQL 9.4 adds many features and parameters; they are included here as well.)
listen_addresses = '0.0.0.0'
port = 1922
max_connections = 100
unix_socket_directories = '.'
unix_socket_permissions = 0700
tcp_keepalives_idle = 60
tcp_keepalives_interval = 10
tcp_keepalives_count = 10
shared_buffers = 512MB
huge_pages = try
maintenance_work_mem = 512MB
autovacuum_work_mem = -1
dynamic_shared_memory_type = posix
vacuum_cost_delay = 10
vacuum_cost_limit = 10000
bgwriter_delay = 10ms
wal_level = logical
synchronous_commit = off
wal_buffers = 16384kB
wal_writer_delay = 10ms
checkpoint_segments = 32
archive_mode = on
archive_command = 'DIR="/pgdata03/pg_arch_9.4/`date +%F`";test -d $DIR || mkdir -p $DIR; cp %p $DIR/%f'
archive_timeout = 600
log_destination = 'csvlog'
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_file_mode = 0600
log_truncate_on_rotation = on
log_checkpoints = on
log_connections = on
log_disconnections = on
log_error_verbosity = verbose
log_lock_waits = on
log_statement = 'ddl'
log_timezone = 'PRC'
autovacuum = on
log_autovacuum_min_duration = 0
datestyle = 'iso, mdy'
timezone = 'PRC'
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
default_text_search_config = 'pg_catalog.english'
Stop both clusters:
postgres@localhost-> /opt/pgsql9.3.5/bin/pg_ctl stop -m fast -D /pgdata01/pg_root
postgres@localhost-> /opt/pgsql9.4.0/bin/pg_ctl stop -m fast -D /pgdata01/pg_root_9.4
Snapshot the 9.3 database's filesystems:
[root@localhost ~]# df -h
zp1/pg_root  30G   19M  30G  1%  /pgdata01
zp1/pg_xlog  31G  428M  30G  2%  /pgdata02
zp1/pg_arch  31G  636M  30G  3%  /pgdata03
zp1/tbs1     31G  170M  30G  1%  /pgdata04
zp1/tbs2     31G   97M  30G  1%  /pgdata05
[root@localhost ~]# zfs snapshot zp1/pg_root@pg9.3.5
[root@localhost ~]# zfs snapshot zp1/pg_xlog@pg9.3.5
[root@localhost ~]# zfs snapshot zp1/pg_arch@pg9.3.5
[root@localhost ~]# zfs snapshot zp1/tbs1@pg9.3.5
[root@localhost ~]# zfs snapshot zp1/tbs2@pg9.3.5
[root@localhost ~]# zfs list -t snapshot
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zp1/pg_arch@pg9.3.5     0      -   635M  -
zp1/pg_root@pg9.3.5     0      -  18.4M  -
zp1/pg_xlog@pg9.3.5     0      -   428M  -
zp1/tbs1@pg9.3.5        0      -   169M  -
zp1/tbs2@pg9.3.5        0      -  96.2M  -
Run the 9.4 pg_upgrade compatibility check:
su - postgres
postgres@localhost-> cd
postgres@localhost-> mkdir upgrade_log
postgres@localhost-> cd upgrade_log/
postgres@localhost-> /opt/pgsql9.4.0/bin/pg_upgrade -b /opt/pgsql9.3.5/bin -B /opt/pgsql9.4.0/bin -d /pgdata01/pg_root -D /pgdata01/pg_root_9.4 -p 1921 -P 1922 -U postgres -j 8 -k -c
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is a superuser                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for invalid "line" user columns                    ok
Checking for presence of required libraries                 ok
Checking database user is a superuser                       ok
Checking for prepared transactions                          ok
*Clusters are compatible*
The clusters check out as compatible, so we can run the real upgrade:
postgres@localhost-> /opt/pgsql9.4.0/bin/pg_upgrade -b /opt/pgsql9.3.5/bin -B /opt/pgsql9.4.0/bin -d /pgdata01/pg_root -D /pgdata01/pg_root_9.4 -p 1921 -P 1922 -U postgres -j 8 -k -r -v
The final output:
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    delete_old_cluster.sh
It leaves two scripts: one collects optimizer statistics, the other deletes the old cluster.
postgres@localhost-> ll
total 20K
-rwx------ 1 postgres postgres 785 Dec 19 19:50 analyze_new_cluster.sh
-rwx------ 1 postgres postgres 114 Dec 19 19:50 delete_old_cluster.sh
-rw------- 1 postgres postgres 326 Dec 19 19:51 pg_upgrade_internal.log
-rw------- 1 postgres postgres 179 Dec 19 19:51 pg_upgrade_server.log
-rw------- 1 postgres postgres 179 Dec 19 19:51 pg_upgrade_utility.log
Next, start the new cluster.
postgres@localhost-> /opt/pgsql9.4.0/bin/pg_ctl start -D /pgdata01/pg_root_9.4
Run the statistics script: pg_upgrade does not migrate optimizer statistics, so they must be collected manually.
The script contains:
postgres@localhost-> cat analyze_new_cluster.sh
#!/bin/sh

echo 'This script will generate minimal optimizer statistics rapidly'
echo 'so your system is usable, and then gather statistics twice more'
echo 'with increasing accuracy. When it is done, your system will'
echo 'have the default level of optimizer statistics.'
echo

echo 'If you have used ALTER TABLE to modify the statistics target for'
echo 'any tables, you might want to remove them and restore them after'
echo 'running this script because they will delay fast statistics generation.'
echo

echo 'If you would like default statistics as quickly as possible, cancel'
echo 'this script and run:'
echo ' "/opt/pgsql9.4.0/bin/vacuumdb" -U "postgres" --all --analyze-only'
echo

"/opt/pgsql9.4.0/bin/vacuumdb" -U "postgres" --all --analyze-in-stages
echo

echo 'Done'
The script has a small problem in our setup (it does not know our socket directory and port), so we supply the connection parameters ourselves.
postgres@localhost-> "/opt/pgsql9.4.0/bin/vacuumdb" -U "postgres" --all --analyze-only -h /pgdata01/pg_root_9.4 -p 1922 -U postgres
vacuumdb: vacuuming database "digoal"
vacuumdb: vacuuming database "postgres"
vacuumdb: vacuuming database "template1"
The upgrade is done.
----------------------------------------------------------------------------------------------------------------------------------------------
Check the data:
postgres@localhost-> /opt/pgsql9.4.0/bin/psql -h 127.0.0.1 -p 1922 -U digoal digoal
psql (9.4.0)
Type "help" for help.
digoal=> \dt
List of relations
 Schema |   Name   | Type  | Owner
--------+----------+-------+--------
 digoal | userinfo | table | digoal
(1 row)
digoal=> \dx
List of installed extensions
   Name   | Version |   Schema   |                     Description
----------+---------+------------+------------------------------------------------------
 pldbgapi | 1.0     | public     | server-side support for debugging PL/pgSQL functions
 plpgsql  | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)
digoal=> select count(*) from userinfo ;
  count
----------
 10000000
(1 row)
Finally, once the upgrade is confirmed good, we can delete the old cluster. Before deleting, double-check that it really is safe to remove; don't put blind trust in pg_upgrade.
postgres@localhost-> cat delete_old_cluster.sh
#!/bin/sh

rm -rf /pgdata01/pg_root
rm -rf /pgdata04/tbs1/PG_9.3_201306121
rm -rf /pgdata05/tbs2/PG_9.3_201306121
Because we used hard links for both $PGDATA and the tablespaces, a quick check and then deletion is fine:
postgres@localhost-> stat PG_9.3_201306121
File: ‘PG_9.3_201306121’
Size: 3 Blocks: 27 IO Block: 131072 directory
Device: 26h/38d Inode: 8 Links: 3
Access: (0700/drwx------) Uid: ( 1001/postgres) Gid: ( 1001/postgres)
Access: 2014-12-19 19:24:39.096630736 +0800
Modify: 2014-12-19 19:24:47.718612433 +0800
Change: 2014-12-19 19:24:47.718612433 +0800
Birth: -
postgres@localhost-> du -sh *
170M PG_9.3_201306121
3.0M PG_9.4_201409291
After deletion, the disk usage is as expected:
postgres@localhost-> du -sh *
170M PG_9.4_201409291
If you want to keep the rollback option open, postpone the following adjustments.
Once you are committed to 9.4, also update /etc/profile and ~/.bash_profile for convenience:
rm -f /opt/pgsql
ln -s /opt/pgsql9.4.0 /opt/pgsql

vi /etc/profile
export PATH=/opt/pgsql/bin:$PATH

vi /home/postgres/.bash_profile
export PGPORT=1921
export PGDATA=/pgdata01/pg_root_9.4
--------------------------------------------------------------------------------------------------------------------------------------------------------------
Finally, a rollback test.
Rolling back is also simple; in brief:
[root@localhost ~]# zfs clone -o mountpoint=/old_pgdata01 zp1/pg_root@pg9.3.5 zp1/old_pgdata01
[root@localhost ~]# zfs clone -o mountpoint=/old_pgdata02 zp1/pg_xlog@pg9.3.5 zp1/old_pgdata02
[root@localhost ~]# zfs clone -o mountpoint=/old_pgdata03 zp1/pg_arch@pg9.3.5 zp1/old_pgdata03
[root@localhost ~]# zfs clone -o mountpoint=/old_pgdata04 zp1/tbs1@pg9.3.5 zp1/old_pgdata04
[root@localhost ~]# zfs clone -o mountpoint=/old_pgdata05 zp1/tbs2@pg9.3.5 zp1/old_pgdata05
df -h
zp1/old_pgdata01  30G   19M  30G  1%  /old_pgdata01
zp1/old_pgdata02  31G  428M  30G  2%  /old_pgdata02
zp1/old_pgdata03  31G  636M  30G  3%  /old_pgdata03
zp1/old_pgdata04  31G  170M  30G  1%  /old_pgdata04
zp1/old_pgdata05  30G   97M  30G  1%  /old_pgdata05
Fix the pg_xlog and tablespace symlinks:
[root@localhost ~]# su - postgres
postgres@localhost-> cd /old_pgdata01
postgres@localhost-> cd pg_root
postgres@localhost-> ll
total 225K
lrwxrwxrwx 1 postgres postgres 17 Dec 19 19:23 pg_xlog -> /pgdata02/pg_xlog
postgres@localhost-> rm -f pg_xlog
postgres@localhost-> ln -s /old_pgdata02/pg_xlog ./
postgres@localhost-> cd pg_tblspc/
postgres@localhost-> ll
total 1.0K
lrwxrwxrwx 1 postgres postgres 14 Dec 19 19:24 16385 -> /pgdata04/tbs1
lrwxrwxrwx 1 postgres postgres 14 Dec 19 19:24 16386 -> /pgdata05/tbs2
postgres@localhost-> rm -f *
postgres@localhost-> ln -s /old_pgdata04/tbs1 ./16385
postgres@localhost-> ln -s /old_pgdata05/tbs2 ./16386
Adjust the old cluster's parameters so it does not conflict with the new instance:
archive_command = 'DIR="/old_pgdata03/pg_arch/`date +%F`";test -d $DIR || mkdir -p $DIR; cp %p $DIR/%f'
port = 1921 (or any port that does not clash with the 9.4 instance on 1922)
Start the old database:
postgres@localhost-> /opt/pgsql9.3.5/bin/pg_ctl start -D /old_pgdata01/pg_root
server starting
Connect to the old database and verify it works:
postgres@localhost-> /opt/pgsql9.3.5/bin/psql -h 127.0.0.1 -p 1921 -U digoal digoal
psql (9.3.5)
Type "help" for help.

digoal=> select count(*) from userinfo ;
  count
----------
 10000000
(1 row)
[Reference]
pg_upgrade --help
Usage:
  pg_upgrade [OPTION]...

Options:
  -b, --old-bindir=BINDIR       old cluster executable directory
  -B, --new-bindir=BINDIR       new cluster executable directory
  -c, --check                   check clusters only, don't change any data
  -d, --old-datadir=DATADIR     old cluster data directory
  -D, --new-datadir=DATADIR     new cluster data directory
  -j, --jobs                    number of simultaneous processes or threads to use
  -k, --link                    link instead of copying files to new cluster
  -o, --old-options=OPTIONS     old cluster options to pass to the server
  -O, --new-options=OPTIONS     new cluster options to pass to the server
  -p, --old-port=PORT           old cluster port number (default 50432)
  -P, --new-port=PORT           new cluster port number (default 50432)
  -r, --retain                  retain SQL and log files after success
  -U, --username=NAME           cluster superuser (default "postgres")
  -v, --verbose                 enable verbose internal logging
  -V, --version                 display version information, then exit
  -?, --help                    show this help, then exit
Before running pg_upgrade you must:
  create a new database cluster (using the new version of initdb)
  shutdown the postmaster servicing the old cluster
  shutdown the postmaster servicing the new cluster

When you run pg_upgrade, you must provide the following information:
  the data directory for the old cluster  (-d DATADIR)
  the data directory for the new cluster  (-D DATADIR)
  the "bin" directory for the old version (-b BINDIR)
  the "bin" directory for the new version (-B BINDIR)

For example:
  pg_upgrade -d oldCluster/data -D newCluster/data -b oldCluster/bin -B newCluster/bin
or
  $ export PGDATAOLD=oldCluster/data
  $ export PGDATANEW=newCluster/data
  $ export PGBINOLD=oldCluster/bin
  $ export PGBINNEW=newCluster/bin
  $ pg_upgrade