Simulating a Private Interconnect Failure Between Two RAC Nodes
Environment: RHEL 5.8, Oracle RAC 11.2.0.3.0
1: Verify that the two-node cluster is healthy
[grid@rac1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.CRSDATA.dg ora....up.type ONLINE ONLINE rac1
ora.DATA.dg ora....up.type ONLINE ONLINE rac1
ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac2
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.chris.db ora....se.type ONLINE ONLINE rac1
ora.cvu ora.cvu.type ONLINE ONLINE rac2
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type ONLINE ONLINE rac2
ora.ons ora.ons.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application OFFLINE OFFLINE
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
ora....ry.acfs ora....fs.type ONLINE ONLINE rac1
ora.scan1.vip ora....ip.type ONLINE ONLINE rac2
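Before pulling the interconnect, it is worth confirming which interface Clusterware has actually registered as the private network; `oifcfg getif` lists the registered networks. A minimal sketch that parses `oifcfg getif`-style output — the sample lines are assumed for this environment (eth0 public, eth1 private interconnect), not captured from it:

```shell
# Hypothetical `oifcfg getif` output for this environment; on a live
# node you would run:  $GRID_HOME/bin/oifcfg getif
sample='eth0  10.13.12.0  global  public
eth1  192.168.1.0  global  cluster_interconnect'

# The fourth column marks the network role; pick the interconnect NIC.
priv_if=$(printf '%s\n' "$sample" | awk '$4 == "cluster_interconnect" {print $1}')
echo "private interconnect interface: $priv_if"
```

Knowing the registered interface name up front tells you exactly which NIC to take down (and which one the CSS heartbeat will die on).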
2: Bring down the private-network interface on node 2
[root@rac2 ~]# date
Tue Jul 16 11:29:41 CST 2013
[root@rac2 ~]# ifdown eth1
3: Check the cluster status:
[grid@rac1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.CRSDATA.dg ora....up.type ONLINE ONLINE rac1
ora.DATA.dg ora....up.type ONLINE ONLINE rac1
ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.chris.db ora....se.type ONLINE ONLINE rac1
ora.cvu ora.cvu.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type ONLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora.rac2.vip ora....t1.type ONLINE ONLINE rac1
ora....ry.acfs ora....fs.type ONLINE ONLINE rac1
ora.scan1.vip ora....ip.type ONLINE ONLINE rac1
Node 2's resources have disappeared from the output, and ora.rac2.vip has failed over to rac1.
4: Check node 2's OS log for entries after 11:29:41:
[root@rac2 ~]# tail -f /var/log/messages
Jul 16 11:30:02 rac2 avahi-daemon[3029]: Withdrawing address record for 192.168.1.151 on eth1.
Jul 16 11:30:02 rac2 avahi-daemon[3029]: Leaving mDNS multicast group on interface eth1.IPv4 with address 192.168.1.151.
Jul 16 11:30:02 rac2 avahi-daemon[3029]: Joining mDNS multicast group on interface eth1.IPv4 with address 169.254.241.123.
Jul 16 11:30:02 rac2 avahi-daemon[3029]: IP_ADD_MEMBERSHIP failed: No such device
Jul 16 11:30:02 rac2 avahi-daemon[3029]: Withdrawing address record for 169.254.241.123 on eth1.
Jul 16 11:30:02 rac2 avahi-daemon[3029]: Interface eth1.IPv4 no longer relevant for mDNS.
Jul 16 11:30:02 rac2 avahi-daemon[3029]: Withdrawing address record for fe80::a00:27ff:fe39:ce0a on eth1.
Jul 16 11:30:02 rac2 avahi-daemon[3029]: Leaving mDNS multicast group on interface eth1.IPv6 with address fe80::a00:27ff:fe39:ce0a.
Jul 16 11:30:02 rac2 avahi-daemon[3029]: iface.c: interface_mdns_mcast_join() called but no local address available.
Jul 16 11:30:02 rac2 avahi-daemon[3029]: Interface eth1.IPv6 no longer relevant for mDNS.
Jul 16 11:30:34 rac2 avahi-daemon[3029]: Withdrawing address record for 10.13.12.152 on eth0.
Jul 16 11:30:34 rac2 avahi-daemon[3029]: Withdrawing address record for 10.13.12.156 on eth0.
The message "Withdrawing address record for 192.168.1.151 on eth1" tells us plainly that the IP address on eth1 has been withdrawn.
A series of errors follows, and the instance and ASM are shut down as well:
alertrac2.log:
2013-07-16 11:30:17.452
[cssd(3335)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval. Removal of this node from cluster in 14.540 seconds
2013-07-16 11:30:25.449
[cssd(3335)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval. Removal of this node from cluster in 6.510 seconds
2013-07-16 11:30:29.445
[cssd(3335)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval. Removal of this node from cluster in 2.500 seconds
2013-07-16 11:30:31.947
[cssd(3335)]CRS-1609:This node is unable to communicate with other nodes in the cluster and is going down to preserve cluster integrity; details at (:CSSNM00008:) in /u01/app/11.2.0.3/grid/log/rac2/cssd/ocssd.log.
2013-07-16 11:30:31.947
[cssd(3335)]CRS-1656:The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /u01/app/11.2.0.3/grid/log/rac2/cssd/ocssd.log
2013-07-16 11:30:32.449
[cssd(3335)]CRS-1608:This node was evicted by node 1, rac1; details at (:CSSNM00005:) in /u01/app/11.2.0.3/grid/log/rac2/cssd/ocssd.log.
2013-07-16 11:30:32.696
[cssd(3335)]CRS-1608:This node was evicted by node 1, rac1; details at (:CSSNM00005:) in /u01/app/11.2.0.3/grid/log/rac2/cssd/ocssd.log.
2013-07-16 11:30:32.943
[cssd(3335)]CRS-1608:This node was evicted by node 1, rac1; details at (:CSSNM00005:) in /u01/app/11.2.0.3/grid/log/rac2/cssd/ocssd.log.
2013-07-16 11:30:33.133
[cssd(3335)]CRS-1652:Starting clean up of CRSD resources.
2013-07-16 11:30:36.061
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(3981)]CRS-5016:Process "/u01/app/11.2.0.3/grid/opmn/bin/onsctli" spawned by agent "/u01/app/11.2.0.3/grid/bin/oraagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/u01/app/11.2.0.3/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log"
2013-07-16 11:30:37.316
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(3981)]CRS-5016:Process "/u01/app/11.2.0.3/grid/bin/lsnrctl" spawned by agent "/u01/app/11.2.0.3/grid/bin/oraagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/u01/app/11.2.0.3/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log"
2013-07-16 11:30:37.400
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(3981)]CRS-5016:Process "/u01/app/11.2.0.3/grid/bin/lsnrctl" spawned by agent "/u01/app/11.2.0.3/grid/bin/oraagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/u01/app/11.2.0.3/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log"
2013-07-16 11:30:37.413
[cssd(3335)]CRS-1654:Clean up of CRSD resources finished successfully.
2013-07-16 11:30:37.415
[cssd(3335)]CRS-1655:CSSD on node rac2 detected a problem and started to shutdown.
2013-07-16 11:30:38.049
[cssd(3335)]CRS-1660:The CSS daemon shutdown has completed
2013-07-16 11:30:38.001
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(3981)]CRS-5822:Agent '/u01/app/11.2.0.3/grid/bin/oraagent_grid' disconnected from server. Details at (:CRSAGF00117:) {0:2:6} in /u01/app/11.2.0.3/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log.
2013-07-16 11:30:38.002
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(8579)]CRS-5822:Agent '/u01/app/11.2.0.3/grid/bin/oraagent_oracle' disconnected from server. Details at (:CRSAGF00117:) {0:8:1180} in /u01/app/11.2.0.3/grid/log/rac2/agent/crsd/oraagent_oracle/oraagent_oracle.log.
2013-07-16 11:30:40.781
[/u01/app/11.2.0.3/grid/bin/orarootagent.bin(3987)]CRS-5822:Agent '/u01/app/11.2.0.3/grid/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) {0:1:14064} in /u01/app/11.2.0.3/grid/log/rac2/agent/crsd/orarootagent_root/orarootagent_root.log.
2013-07-16 11:30:46.076
[ohasd(2976)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2013-07-16 11:30:51.157
[ohasd(2976)]CRS-2765:Resource 'ora.ctssd' has failed on server 'rac2'.
2013-07-16 11:30:51.182
[ohasd(2976)]CRS-2765:Resource 'ora.evmd' has failed on server 'rac2'.
2013-07-16 11:30:51.658
[ohasd(2976)]CRS-2765:Resource 'ora.cluster_interconnect.haip' has failed on server 'rac2'.
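The CRS-1612/1611/1610 countdown is driven by the CSS misscount, the network heartbeat timeout (30 seconds by default on Linux in 11.2; on a live cluster you can confirm it with `crsctl get css misscount`). A minimal sketch of where the 50%/75%/90% warnings fall, assuming the default value — the computed ~15.0s/7.5s/3.0s remaining line up with the 14.540, 6.510, and 2.500 seconds reported in the log above:

```shell
# With the default CSS misscount of 30s, warnings fire when the missed
# network-heartbeat time crosses 50%, 75% and 90% of the timeout.
misscount=30
for pct in 50 75 90; do
  elapsed=$(awk -v m="$misscount" -v p="$pct" 'BEGIN {printf "%.1f", m*p/100}')
  left=$(awk -v m="$misscount" -v e="$elapsed" 'BEGIN {printf "%.1f", m-e}')
  echo "${pct}% of misscount: ~${elapsed}s missed, ~${left}s until eviction"
done
```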
[grid@rac2 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
[grid@rac2 ~]$ ps -ef |grep asm_
grid 24856 24764 0 12:15 pts/0 00:00:00 grep asm_
[grid@rac2 ~]$ ps -ef |grep ora_
grid 24863 24764 0 12:15 pts/0 00:00:00 grep ora_
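Note that the `ps -ef | grep asm_` output above contains only the grep command itself, because the pattern matches grep's own command line. A common idiom is to bracket one character of the pattern so the grep process can no longer match itself; a small sketch:

```shell
# `ps -ef | grep asm_` always matches its own grep command line; the
# bracket idiom [a]sm_ prevents that, so an empty result is trustworthy.
if ps -ef | grep -q '[a]sm_'; then
  asm_state="running"
else
  asm_state="not running"
fi
echo "ASM background processes: $asm_state"
```

With this idiom, no output really does mean the ASM (or, with `[o]ra_`, the database) background processes are gone.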
5: Check the IP configuration of eth1 on node 2:
[root@rac2 rac2]# ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:7E:55:A2
inet addr:10.13.12.151 Bcast:10.13.12.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe7e:55a2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3233951 errors:0 dropped:0 overruns:0 frame:0
TX packets:2944115 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:206162039 (196.6 MiB) TX bytes:185034492 (176.4 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:6074681 errors:0 dropped:0 overruns:0 frame:0
TX packets:6074681 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4408302397 (4.1 GiB) TX bytes:4408302397 (4.1 GiB)
eth1 no longer appears in the output.
6: Check the state of eth1
[root@rac2 rac2]# ethtool eth1
Settings for eth1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: umbg
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: no
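The link status can be pulled out of `ethtool` output mechanically, which is handy when checking several nodes. A minimal sketch, using a shortened copy of the capture above as sample input:

```shell
# Shortened ethtool output from the capture above; on a live host run
#   ethtool eth1
sample='Settings for eth1:
	Speed: 1000Mb/s
	Duplex: Full
	Link detected: no'

# Split on ": " and keep the value after "Link detected".
link=$(printf '%s\n' "$sample" | awk -F': ' '/Link detected/ {print $2}')
if [ "$link" = "yes" ]; then
  echo "eth1 link is up"
else
  echo "eth1 link is down"
fi
```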
"Link detected: no" shows there is no link; at this point we can be fairly confident the failure was caused by the eth1 interface.
7: Try to bring eth1 back up
[root@rac2 rac2]# ifup eth1
[root@rac2 rac2]# ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:7E:55:A2
inet addr:10.13.12.151 Bcast:10.13.12.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe7e:55a2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3234866 errors:0 dropped:0 overruns:0 frame:0
TX packets:2944876 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:206223173 (196.6 MiB) TX bytes:185088986 (176.5 MiB)
eth1 Link encap:Ethernet HWaddr 08:00:27:39:CE:0A
inet addr:192.168.1.151 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe39:ce0a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:29191531 errors:0 dropped:0 overruns:0 frame:0
TX packets:44185710 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13825036931 (12.8 GiB) TX bytes:39052959687 (36.3 GiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:6074868 errors:0 dropped:0 overruns:0 frame:0
TX packets:6074868 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4408312520 (4.1 GiB) TX bytes:4408312520 (4.1 GiB)
eth1 is back up.
8: Check node 2's OS log, CRS log, and alert log; there is no new output
[grid@rac2 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
9: Try to start CRS; the start fails
[root@rac2 ~]# cd /u01/app/11.2.0.3/grid/bin/
[root@rac2 bin]# ./crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
10: Try to stop CRS; this fails as well
[root@rac2 bin]# ./crsctl stop crs
CRS-2796: The command may not proceed when Cluster Ready Services is not running
CRS-4687: Shutdown command has completed with errors.
CRS-4000: Command Stop failed, or completed with errors.
[root@rac2 bin]# ./crsctl start has
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
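The plain stop fails with CRS-2796 because CRSD is already down, while the start fails with CRS-4640 because OHASD never stopped: the stack is wedged half-way. A hedged sketch of detecting this state from `crsctl check crs` output (the sample below is the output captured above; on a live node you would feed in `$GRID_HOME/bin/crsctl check crs`):

```shell
# Output captured earlier: OHASD online (CRS-4638) but CRSD unreachable
# (CRS-4535) -- the combination where a plain `stop crs` fails.
check_output='CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager'

if printf '%s\n' "$check_output" | grep -q 'CRS-4638' &&
   printf '%s\n' "$check_output" | grep -q 'CRS-4535'; then
  force="yes"
  echo "stack is half-down: use crsctl stop crs -f, then crsctl start crs"
else
  force="no"
  echo "use the normal crsctl stop crs / crsctl start crs"
fi
```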
11: Force-stop CRS, then start it again
[root@rac2 bin]# ./crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac2 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
CRS reports a successful start.
12: Check the logs
[root@rac2 ~]# tail -f /var/log/messages
Jul 16 12:40:17 rac2 kernel: ACFSK-0037: Module load succeeded. Build information: (LOW DEBUG) USM_11.2.0.3.0_LINUX.X64_110803.1 2011/08/04 10:32:50
Jul 16 12:40:17 rac2 kernel: OKSK-00010: Persistent OKS log opened at /u01/app/11.2.0.3/grid/log/rac2/acfs/acfs.log.0.
Jul 16 12:40:48 rac2 avahi-daemon[3029]: Registering new address record for 10.13.12.156 on eth0.
Jul 16 12:40:48 rac2 avahi-daemon[3029]: Withdrawing address record for 10.13.12.156 on eth0.
Jul 16 12:40:48 rac2 avahi-daemon[3029]: Registering new address record for 10.13.12.156 on eth0.
Jul 16 12:40:48 rac2 avahi-daemon[3029]: Withdrawing address record for 10.13.12.156 on eth0.
Jul 16 12:40:48 rac2 avahi-daemon[3029]: Registering new address record for 10.13.12.156 on eth0.
The IP addresses are being re-registered.
[grid@rac2 ~]$ tail -f /u01/app/11.2.0.3/grid/log/rac2/alertrac2.log
[ctssd(28263)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
[client(28297)]CRS-10001:16-Jul-13 12:40 ACFS-9391: Checking for existing ADVM/ACFS installation.
[client(28308)]CRS-10001:16-Jul-13 12:40 ACFS-9392: Validating ADVM/ACFS installation files for operating system.
[client(28310)]CRS-10001:16-Jul-13 12:40 ACFS-9393: Verifying ASM Administrator setup.
[client(28313)]CRS-10001:16-Jul-13 12:40 ACFS-9308: Loading installed ADVM/ACFS drivers.
[client(28316)]CRS-10001:16-Jul-13 12:40 ACFS-9154: Loading 'oracleoks.ko' driver.
[client(28327)]CRS-10001:16-Jul-13 12:40 ACFS-9154: Loading 'oracleadvm.ko' driver.
[client(28348)]CRS-10001:16-Jul-13 12:40 ACFS-9154: Loading 'oracleacfs.ko' driver.
[client(28436)]CRS-10001:16-Jul-13 12:40 ACFS-9327: Verifying ADVM/ACFS devices.
[client(28438)]CRS-10001:16-Jul-13 12:40 ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
[client(28443)]CRS-10001:16-Jul-13 12:40 ACFS-9156: Detecting control device '/dev/ofsctl'.
[client(28448)]CRS-10001:16-Jul-13 12:40 ACFS-9322: completed
2013-07-16 12:40:17.713
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(28047)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/app/11.2.0.3/grid/log/rac2/agent/ohasd/oraagent_grid/oraagent_grid.log"
2013-07-16 12:40:18.486
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(28047)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/app/11.2.0.3/grid/log/rac2/agent/ohasd/oraagent_grid/oraagent_grid.log"
2013-07-16 12:40:24.079
[ctssd(28263)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-07-16 12:40:41.602
[crsd(28606)]CRS-1012:The OCR service started on node rac2.
2013-07-16 12:40:41.641
[evmd(28289)]CRS-1401:EVMD started on node rac2.
2013-07-16 12:40:43.427
[crsd(28606)]CRS-1201:CRSD started on node rac2.
OCR, EVMD, and CRSD have all started.
[grid@rac2 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
CRS checks out healthy.
13: Check the cluster status
[grid@rac1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.CRSDATA.dg ora....up.type ONLINE ONLINE rac1
ora.DATA.dg ora....up.type ONLINE ONLINE rac1
ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.chris.db ora....se.type ONLINE ONLINE rac1
ora.cvu ora.cvu.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type ONLINE ONLINE rac2
ora.ons ora.ons.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application OFFLINE OFFLINE
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
ora....ry.acfs ora....fs.type ONLINE ONLINE rac1
ora.scan1.vip ora....ip.type ONLINE ONLINE rac1
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.registry.acfs
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.chris.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac2
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
Compared with the initial state, the ora.cvu resource (CVU, the Cluster Verification Utility) is no longer on rac2: it has relocated to rac1, as have the SCAN listener and SCAN VIP. Such singleton resources stay wherever they failed over rather than moving back automatically. Cluster recovery is complete.
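As a final check, any resource whose Target is ONLINE while its State is OFFLINE (as ora.oc4j was in step 3) still needs attention; ora.gsd is OFFLINE by default in 11.2 and can be ignored. A minimal sketch over `crs_stat -t`-style lines — the sample input below is constructed to mimic the step-3 output, not captured live:

```shell
# Sample crs_stat -t style lines: Name, Type, Target, State, Host.
# ora.gsd is Target OFFLINE by design; ora.oc4j is the real mismatch.
sample='ora.gsd   ora.gsd.type   OFFLINE  OFFLINE
ora.oc4j  ora.oc4j.type  ONLINE   OFFLINE
ora.ons   ora.ons.type   ONLINE   ONLINE   rac1'

# Flag resources whose Target ($3) is ONLINE but State ($4) is OFFLINE.
bad=$(printf '%s\n' "$sample" | awk '$3 == "ONLINE" && $4 == "OFFLINE" {print $1}')
if [ -n "$bad" ]; then
  echo "resources needing attention: $bad"
else
  echo "all ONLINE targets are ONLINE"
fi
```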