Our demo cluster based on corosync + pacemaker is complete, so let's run some tests against it.
For an introduction to the cluster components, see the two documents posted earlier.
The cluster configuration:
crm(live)configure# show
node node95 ## the cluster nodes
node node96
primitive ClusterIp ocf:heartbeat:IPaddr2 \ ## a resource: the cluster VIP
params ip="192.168.11.101" cidr_netmask="32" \
op monitor interval="30s" \
meta target-role="Started" is-managed="true"
primitive fence_vm95 stonith:fence_vmware \ ## the fence device for node95
params ipaddr="192.168.10.197" login="user" passwd="passwd" vmware_datacenter="GZ-Offices" vmware_type="esx" action="reboot" port="dba-test-Cos6.2.64-11.95" pcmk_reboot_action="reboot" pcmk_host_list="node95 node96" \
op monitor interval="20" timeout="60s" \
op start interval="0" timeout="60s" \
meta target-role="Started"
primitive fence_vm96 stonith:fence_vmware \ ## the fence device for node96
params ipaddr="192.168.10.197" login="user" passwd="passwd" vmware_datacenter="GZ-Offices" vmware_type="esx" action="reboot" port="dba-test-Cos6.2.64-11.96" pcmk_reboot_action="reboot" pcmk_host_list="node95 node96" \
op monitor interval="20" timeout="60s" \
op start interval="0" timeout="60s" \
meta target-role="Started"
primitive ping ocf:pacemaker:ping \ ## a resource that pings each node to check network connectivity
params host_list="192.168.11.95 192.168.11.96 192.168.11.101 " multiplier="10" \
op monitor interval="10s" timeout="60s" \
op start interval="0" timeout="60s" \
op stop interval="0" timeout="100s"
clone clone-ping ping \
meta target-role="Started"
primitive postgres_res ocf:heartbeat:pgsql \ ## the database resource; we use PostgreSQL
params pgctl="/usr/local/pgsql/bin/pg_ctl" psql="/usr/local/pgsql/bin/psql" start_opt="" pgdata="/usr/local/pgsql/data" config="/usr/local/pgsql/data/postgresql.conf" pgdba="postgres" pgdb="postgres" \
op start interval="0" timeout="120s" \
op stop interval="0" timeout="120s" \
op monitor interval="30s" timeout="30s" depth="0" master-max="2" \
meta target-role="Started" is-managed="true"
location ClusterIp-prefer-to-master ClusterIp 50: node96 ## placement policies: the VIP and the
location Pg-prefer-to-master postgres_res 50: node96 ## database should start on node96, i.e. node96 is the primary
location cli-prefer-ClusterIp ClusterIp \
rule $id="cli-prefer-rule-ClusterIp" 50: #uname eq node96
location cli-prefer-ping ping \
rule $id="cli-prefer-rule-ping" 50: #uname eq node96
location cli-prefer-postgres_res postgres_res \
rule $id="cli-prefer-rule-postgres_res" 50: #uname eq node96
location loc_fence_vm95 fence_vm95 -inf: node95
location loc_fence_vm96 fence_vm96 -inf: node96
colocation Pg-with-ClusterIp inf: ClusterIp postgres_res ## the VIP and the database must run on the same node
order Pg-after-ClusterIp inf: ClusterIp postgres_res ## start the VIP first, then the database
property $id="cib-bootstrap-options" \ ## cluster-wide defaults
dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="true" \
last-lrm-refresh="1347612913" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
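For reference, constraints like the ones above are entered through the crm shell and are worth validating before relying on them. A minimal sketch, using resource and node names from this configuration (crmsh and the pacemaker CLI tools are assumed to be installed):

```shell
# Re-create three of the policies above from the command line:
crm configure location Pg-prefer-to-master postgres_res 50: node96
crm configure colocation Pg-with-ClusterIp inf: ClusterIp postgres_res
crm configure order Pg-after-ClusterIp inf: ClusterIp postgres_res

# Sanity-check the live CIB for configuration errors:
crm_verify --live-check -V
```

`crm_verify` is the cheapest guard against a typo taking down the cluster; it only reads the CIB and never changes anything.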
First, the cluster status:
============
Last updated: Mon Sep 17 15:28:28 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node96
postgres_res (ocf::heartbeat:pgsql): Started node96
ClusterIp (ocf::heartbeat:IPaddr2): Started node96
All the main resources are running on node96, which matches our placement policies.
On another machine I set up a loop that pings the VIP continuously, to verify the service stays reachable:
--- 192.168.11.101 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.311/0.385/0.529/0.079 ms
Mon Sep 17 15:31:28 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.364 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.352 ms
64 bytes from 192.168.11.101: icmp_seq=3 ttl=64 time=0.332 ms
64 bytes from 192.168.11.101: icmp_seq=4 ttl=64 time=0.396 ms
64 bytes from 192.168.11.101: icmp_seq=5 ttl=64 time=0.479 ms
Test 1:
Reboot the node96 host, pinging the VIP the whole time:
[root@dba-test-11-98 ~]# while true ; do date ; ping -c 2 192.168.11.101; done
Mon Sep 17 15:58:26 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.374 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.434 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.374/0.404/0.434/0.030 ms
Mon Sep 17 15:58:27 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.321 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.445 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.321/0.383/0.445/0.062 ms
[... identical two-packet rounds from 15:58:28 through 15:58:40 elided; every round reported 0% packet loss ...]
Mon Sep 17 15:58:41 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.609 ms
--- 192.168.11.101 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms
Mon Sep 17 15:58:41 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=1.08 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=1.30 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 1.086/1.193/1.300/0.107 ms
Mon Sep 17 15:58:42 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.475 ms
The ping never failed.
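Reading an outage window out of a raw ping transcript is tedious. A probe that prints one timestamped line per attempt makes it much easier to grep for failures afterwards; a sketch, with the VIP address from this setup and iputils `ping` assumed:

```shell
#!/bin/sh
# probe_once: ping a host once with a 1-second timeout and print a
# timestamped ok/FAILED line. Grep the output for FAILED afterwards
# to read off the exact outage window.
probe_once() {
    if ping -c 1 -W 1 "$1" > /dev/null 2>&1; then
        echo "$(date '+%F %T') $1 ok"
    else
        echo "$(date '+%F %T') $1 FAILED"
    fi
}

# Continuous probe against the cluster VIP (run this on a third host):
# while true; do probe_once 192.168.11.101; sleep 0.2; done
```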
Now let's look at the cluster status during the reboot:
============
Last updated: Mon Sep 17 15:56:15 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node96
postgres_res (ocf::heartbeat:pgsql): Started node96
ClusterIp (ocf::heartbeat:IPaddr2): Started node96
============
Last updated: Mon Sep 17 15:56:18 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node96
ClusterIp (ocf::heartbeat:IPaddr2): Started node96
At 15:56:15 the system is healthy: the PG database and the VIP are running on node96, and both node95 and node96 are online.
By 15:56:18 the PG database has already stopped on node96; the VIP is still on node96 and both nodes remain online.
============
Last updated: Mon Sep 17 15:56:18 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node96
Within the same second, 15:56:18, the VIP stops as well.
============
Last updated: Mon Sep 17 15:56:18 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node96
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
Still within 15:56:18, the VIP has already moved to node95. The ping resource is still running on node96, which tells us node96 has not powered down yet.
============
Last updated: Mon Sep 17 15:56:19 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
============
Last updated: Mon Sep 17 15:56:20 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
At 15:56:19 the ping resource stops on node96. node96 still hasn't powered off; fence_vm95 is still running there.
At 15:56:20 the PG database starts successfully on node95. The ping resource hasn't started on node95 yet, and node96 still hasn't finished shutting down.
============
Last updated: Mon Sep 17 15:56:20 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
At 15:56:20 the ping resource has not yet moved to node95, and fence_vm95 has now stopped on node96. node96 still hasn't finished shutting down.
============
Last updated: Mon Sep 17 15:56:24 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
============
Last updated: Mon Sep 17 15:56:25 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
At 15:56:24 the ping resource still hasn't started on node95.
At 15:56:25 it starts successfully there. node96 still has not completed its reboot.
============
Last updated: Mon Sep 17 15:56:25 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 ]
OFFLINE: [ node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
At 15:56:25 node96's network goes down and its state changes to OFFLINE.
At this point the failover is complete.
Pings to the VIP were never interrupted.
The entire switchover finished in about ten seconds; since the database carried no load, this is on the fast side.
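The figure can be read straight off the crm_mon timestamps: the database stopped on node96 at 15:56:18, and the full stack (including the ping clone) was back on node95 at 15:56:25. A sketch of the arithmetic (GNU `date` assumed):

```shell
start=$(date -d "15:56:18" +%s)   # pg stops on node96
end=$(date -d "15:56:25" +%s)     # last resource (ping) up on node95
echo "observed failover window: $((end - start))s"
```

That is roughly seven seconds of resource movement, consistent with the figure above.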
Let's look at the logs on node95.
Sep 17 15:56:18 node95 lrmd: [1837]: info: rsc:ClusterIp:9: start
Sep 17 15:56:18 node95 IPaddr2(ClusterIp)[2037]: INFO: ip -f inet addr add 192.168.11.101/32 brd 192.168.11.101 dev eth0
Sep 17 15:56:18 node95 avahi-daemon[1507]: Registering new address record for 192.168.11.101 on eth0.IPv4.
Sep 17 15:56:18 node95 IPaddr2(ClusterIp)[2037]: INFO: ip link set eth0 up
Sep 17 15:56:18 node95 IPaddr2(ClusterIp)[2037]: INFO: /usr/lib64/heartbeat/send_arp -i 200 -r 5 -p /var/run/heartbeat/rsctmp/send_arp-192.168.11.101 eth0 192.168.11.101 auto not_used not_used
Sep 17 15:56:18 node95 crmd[1840]: info: process_lrm_event: LRM operation ClusterIp_start_0 (call=9, rc=0, cib-update=14, confirmed=true) ok
The VIP has been brought up, and a gratuitous ARP broadcast was sent.
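The send_arp call is what makes the move transparent to clients: the gratuitous ARP tells switches and neighbours that the VIP now answers at a different MAC address. From a third host this can be observed directly; a sketch, where iputils `arping` and the interface name eth0 are assumptions:

```shell
# The replying MAC should now belong to node95's NIC instead of node96's:
arping -c 3 -I eth0 192.168.11.101
# Or inspect the local neighbour (ARP) cache:
ip neigh show 192.168.11.101
```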
Sep 17 15:56:18 node95 lrmd: [1837]: info: rsc:postgres_res:10: start
Sep 17 15:56:18 node95 lrmd: [1837]: info: rsc:ClusterIp:11: monitor
Sep 17 15:56:18 node95 crmd[1840]: info: process_lrm_event: LRM operation ClusterIp_monitor_30000 (call=11, rc=0, cib-update=15, confirmed=false) ok
Sep 17 15:56:19 node95 lrmd: [1837]: info: rsc:ping:12: start
Sep 17 15:56:19 node95 pgsql(postgres_res)[2095]: INFO: server starting
Sep 17 15:56:19 node95 pgsql(postgres_res)[2095]: WARNING: PostgreSQL start SLAVE db role in READ ONLY ,need to contact DBA
Sep 17 15:56:19 node95 pgsql(postgres_res)[2095]: INFO: PostgreSQL is down
Sep 17 15:56:20 node95 pgsql(postgres_res)[2095]: INFO: PostgreSQL is started.
Sep 17 15:56:20 node95 crmd[1840]: info: process_lrm_event: LRM operation postgres_res_start_0 (call=10, rc=0, cib-update=16, confirmed=true) ok
Between 15:56:18 and 15:56:20, PG started on node95. The WARNING above comes from our customisation for PG standbys: on a cold start we do not promote; the standby comes up read-only, and a DBA must be contacted to intervene.
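Whether an instance came up as a read-only standby can be checked with a standard PostgreSQL catalog function (the binary path and postgres user are taken from the resource definition above):

```shell
# Returns 't' while the server is a standby (read-only), 'f' on a primary:
/usr/local/pgsql/bin/psql -U postgres -Atc "select pg_is_in_recovery();"
```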