--*****************************************
-- Verifying the Oracle RAC installation environment with runcluvfy
--*****************************************
As the saying goes, to do a good job one must first sharpen one's tools. Installing Oracle RAC is a sizable undertaking, and without proper up-front planning and configuration the installation becomes far more complex than expected. Fortunately, the runcluvfy tool greatly simplifies the work. The demonstration below is based on an Oracle 10g RAC installation on Linux.
1. Run the pre-installation checks with runcluvfy from the installation media path
[oracle@node1 cluvfy]$ pwd
/u01/Clusterware/clusterware/cluvfy
[oracle@node1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "node1"
Destination Node Reachable?
------------------------------------ ------------------------
node1 yes
node2 yes
Result: Node reachability check passed from node "node1".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
node2 passed
node1 passed
Result: User equivalence check passed for user "oracle".
Checking administrative privileges...
Check: Existence of user "oracle"
Node Name User Exists Comment
------------ ------------------------ ------------------------
node2 yes passed
node1 yes passed
Result: User existence check passed for "oracle".
Check: Existence of group "oinstall"
Node Name Status Group ID
------------ ------------------------ ------------------------
node2 exists 500
node1 exists 500
Result: Group existence check passed for "oinstall".
Check: Membership of user "oracle" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 yes yes yes yes passed
node1 yes yes yes yes passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
Interface information for node "node2"
Interface Name IP Address Subnet
------------------------------ ------------------------------ ----------------
eth0 192.168.0.12 192.168.0.0
eth1 10.101.0.12 10.101.0.0
Interface information for node "node1"
Interface Name IP Address Subnet
------------------------------ ------------------------------ ----------------
eth0 192.168.0.11 192.168.0.0
eth1 10.101.0.11 10.101.0.0
Check: Node connectivity of subnet "192.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth0 node1:eth0 yes
Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) node2,node1.
Check: Node connectivity of subnet "10.101.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth1 node1:eth1 yes
Result: Node connectivity check passed for subnet "10.101.0.0" with node(s) node2,node1.
Suitable interfaces for the private interconnect on subnet "192.168.0.0":
node2 eth0:192.168.0.12
node1 eth0:192.168.0.11
Suitable interfaces for the private interconnect on subnet "10.101.0.0":
node2 eth1:10.101.0.12
node1 eth1:10.101.0.11
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
Checking system requirements for 'crs'...
Check: Total memory
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 689.38MB (705924KB) 512MB (524288KB) passed
node1 689.38MB (705924KB) 512MB (524288KB) passed
Result: Total memory check passed.
Check: Free disk space in "/tmp" dir
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 4.22GB (4428784KB) 400MB (409600KB) passed
node1 4.22GB (4426320KB) 400MB (409600KB) passed
Result: Free disk space check passed.
Check: Swap space
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 2GB (2096472KB) 1GB (1048576KB) passed
node1 2GB (2096472KB) 1GB (1048576KB) passed
Result: Swap space check passed.
Check: System architecture
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 i686 i686 passed
node1 i686 i686 passed
Result: System architecture check passed.
Check: Kernel version
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 2.6.18-194.el5 2.4.21-15EL passed
node1 2.6.18-194.el5 2.4.21-15EL passed
Result: Kernel version check passed.
Check: Package existence for "make-3.79"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 make-3.81-3.el5 passed
node1 make-3.81-3.el5 passed
Result: Package existence check passed for "make-3.79".
Check: Package existence for "binutils-2.14"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 binutils-2.17.50.0.6-14.el5 passed
node1 binutils-2.17.50.0.6-14.el5 passed
Result: Package existence check passed for "binutils-2.14".
Check: Package existence for "gcc-3.2"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 gcc-4.1.2-48.el5 passed
node1 gcc-4.1.2-48.el5 passed
Result: Package existence check passed for "gcc-3.2".
Check: Package existence for "glibc-2.3.2-95.27"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 glibc-2.5-49 passed
node1 glibc-2.5-49 passed
Result: Package existence check passed for "glibc-2.3.2-95.27".
Check: Package existence for "compat-db-4.0.14-5"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 compat-db-4.2.52-5.1 passed
node1 compat-db-4.2.52-5.1 passed
Result: Package existence check passed for "compat-db-4.0.14-5".
Check: Package existence for "compat-gcc-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result: Package existence check failed for "compat-gcc-7.3-2.96.128".
Check: Package existence for "compat-gcc-c++-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".
Check: Package existence for "compat-libstdc++-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".
Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".
Check: Package existence for "openmotif-2.2.3"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 openmotif-2.3.1-2.el5_4.1 passed
node1 openmotif-2.3.1-2.el5_4.1 passed
Result: Package existence check passed for "openmotif-2.2.3".
Check: Package existence for "setarch-1.3-1"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 setarch-2.0-1.1 passed
node1 setarch-2.0-1.1 passed
Result: Package existence check passed for "setarch-1.3-1".
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result: Group existence check passed for "dba".
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result: Group existence check passed for "oinstall".
Check: User existence for "nobody"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result: User existence check passed for "nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup was unsuccessful on all the nodes.
The error "Could not find a suitable set of interfaces for VIPs." can be ignored: it is a bug, documented in detail on Metalink as Doc ID 338924.1 (reproduced at the end of this article).
As for the packages reported as failed above, install as many of them as possible on the system.
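The four compat packages flagged as failed can be queried before installing. A minimal shell sketch (the rpm query names drop the version suffix, and the suggested install command in the comment is an assumption; adjust to your distribution's actual package file names):

```shell
#!/bin/sh
# Report the install status of the compat packages cluvfy flagged as failed.
# On a machine without rpm, every package is reported as missing.
check_pkg() {
    if command -v rpm >/dev/null 2>&1 && rpm -q "$1" >/dev/null 2>&1
    then echo "$1: installed"
    else echo "$1: missing (install from the OS media, e.g. rpm -ivh $1-*.rpm)"
    fi
}
report=$(for pkg in compat-gcc compat-gcc-c++ compat-libstdc++ compat-libstdc++-devel
do check_pkg "$pkg"; done)
echo "$report"
```

Each package produces exactly one status line, so the report is easy to scan on both nodes.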
2. Post-installation check after Clusterware is installed; note that the cluvfy executed this time is the copy under the installed Clusterware home
[oracle@node1 ~]$ pwd
/u01/app/oracle/product/10.2.0/crs_1/bin
[oracle@node1 ~]$ ./cluvfy stage -post crsinst -n node1,node2
Performing post-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "node1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking Cluster manager integrity...
Checking CSS daemon...
Daemon status check passed for "CSS daemon".
Cluster manager integrity check passed.
Checking cluster integrity...
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
Checking the version of OCR...
OCR of correct Version "2" exists.
Checking data integrity of OCR...
Data integrity check for OCR passed.
OCR integrity check passed.
Checking CRS integrity...
Checking daemon liveness...
Liveness check passed for "CRS daemon".
Checking daemon liveness...
Liveness check passed for "CSS daemon".
Checking daemon liveness...
Liveness check passed for "EVM daemon".
Checking CRS health...
CRS health check passed.
CRS integrity check passed.
Checking node application existence...
Checking existence of VIP node application (required)
Check passed.
Checking existence of ONS node application (optional)
Check passed.
Checking existence of GSD node application (optional)
Check passed.
Post-check for cluster services setup was successful.
The checks above show that the Clusterware background processes, the nodeapps resources, and the OCR are all in the passed state; that is, Clusterware was installed successfully.
3. Usage of cluvfy
[oracle@node1 ~]$ cluvfy -help    # the -help option by itself prints cluvfy's usage information
USAGE:
cluvfy [ -help ]
cluvfy stage { -list | -help }
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
cluvfy comp { -list | -help }
cluvfy comp <component-name> <component-specific options> [-verbose]
[oracle@node1 ~]$ cluvfy comp -list
USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]
Valid components are:
nodereach : checks reachability between nodes
nodecon : checks node connectivity
cfs : checks CFS integrity
ssa : checks shared storage accessibility
space : checks space availability
sys : checks minimum system requirements
clu : checks cluster integrity
clumgr : checks cluster manager integrity
ocr : checks OCR integrity
crs : checks CRS integrity
nodeapp : checks node applications existence
admprv : checks administrative privileges
peer : compares properties with peers
4. ID 338924.1
CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs [ID 338924.1]
________________________________________
Modified 29-JUL-2010 Type PROBLEM Status PUBLISHED
In this Document
Symptoms
Cause
Solution
References
________________________________________
Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.7 - Release: 10.2 to 11.1
Information in this document applies to any platform.
Symptoms
When running cluvfy to check network connectivity at various stages of the RAC/CRS installation process, cluvfy fails
with errors similar to the following:
=========================
Suitable interfaces for the private interconnect on subnet "10.0.0.0":
node1 eth0:10.0.0.1
node2 eth0:10.0.0.2
Suitable interfaces for the private interconnect on subnet "192.168.1.0":
node1_internal eth1:192.168.1.2
node2_internal eth1:192.168.1.1
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
========================
On Oracle 11g, you may still see a warning in some cases, such as:
========================
WARNING:
Could not find a suitable set of interfaces for VIPs.
========================
Output seen will be comparable to that noted above, but IP addresses and node names may be different - i.e. the node names
of 'node1', 'node2', 'node1_internal', 'node2_internal' will be substituted with your actual Public and Private node names.
A second problem that will be encountered in this situation is that at the end of the CRS installation for 10gR2, VIPCA
will be run automatically in silent mode, as one of the 'optional' configuration assistants. In this scenario, the VIPCA
will fail at the end of the CRS installation. The InstallActions log will show output such as:
> Oracle CRS stack installed and running under init(1M)
> Running vipca(silent) for configuring nodeapps
> The given interface(s), "eth0" is not public. Public interfaces should
> be used to configure virtual IPs.
Cause
This issue occurs due to incorrect assumptions made in cluvfy and vipca based on an Internet Best Practice document -
"RFC1918 - Address Allocation for Private Internets". This Internet Best Practice RFC can be viewed here:
http://www.faqs.org/rfcs/rfc1918.html
From an Oracle perspective, this issue is tracked in BUG:4437727
Per BUG:4437727, cluvfy makes an incorrect assumption based on RFC 1918 that any IP address/subnet that begins with any
of the following octets is private and hence may not be fit for use as a VIP:
172.16.x.x through 172.31.x.x
192.168.x.x
10.x.x.x
However, this assumption does not take into account that it is possible to use these IPs as Public IPs on an internal
network (or intranet). Therefore, it is very common to use IP addresses in these ranges as Public IPs and as Virtual
IP(s), and this is a supported configuration.
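The faulty heuristic is easy to reproduce. The sketch below is a hypothetical helper (not Oracle's actual code) that classifies addresses purely by the RFC 1918 ranges listed above, which is essentially all the buggy check considers before declaring an interface unfit for a VIP:

```shell
#!/bin/sh
# Classify an IPv4 address the way the buggy check does: membership in an
# RFC 1918 private range alone disqualifies it as a VIP candidate, even
# though such addresses are perfectly usable as public/VIP IPs on an intranet.
is_rfc1918() {
    case "$1" in
        10.*)                                  return 0 ;;
        192.168.*)                             return 0 ;;
        172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;
        *)                                     return 1 ;;
    esac
}
for ip in 192.168.0.11 172.16.5.9 172.32.0.1 137.30.1.5
do
    if is_rfc1918 "$ip"
    then echo "$ip: RFC 1918 private - rejected as a VIP by the check"
    else echo "$ip: treated as public"
    fi
done
```

Note that 172.32.0.1 falls just outside the 172.16.0.0/12 block, which is why the pattern stops at 172.31.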
Solution
The solution to the error above that is given when running 'cluvfy' is to simply ignore it if you intend to use an IP in
one of the above ranges for your VIP. The installation and configuration can continue with no corrective action necessary.
One result of this, as noted in the problem section, is that the silent VIPCA will fail at the end of the 10gR2 CRS
installation. This is because VIPCA is running in silent mode and is trying to notify that the IPs that were provided
may not be fit to be used as VIP(s). To correct this, you can manually execute the VIPCA GUI after the CRS installation
is complete. VIPCA needs to be executed from the CRS_HOME/bin directory as the 'root' user (on Unix/Linux) or as a
Local Administrator (on Windows):
$ cd $ORA_CRS_HOME/bin
$ ./vipca
Follow the prompts for VIPCA to select the appropriate interface for the public network, and assign the VIPs for each node
when prompted. Manually running VIPCA in the GUI mode, using the same IP addresses, should complete successfully.
Note that if you patch to 10.2.0.3 or above, VIPCA will run correctly in silent mode. The command to re-run vipca
silently can be found in CRS_HOME/cfgtoollogs in the file 'configToolAllCommands' or
'configToolFailedCommands'. Thus, in the case of a new install, the silent mode VIPCA command will fail after the
10.2.0.1 base release install, but once the CRS Home is patched to 10.2.0.3 or above, vipca can be re-run silently,
without the need to invoke the GUI tool.
References
NOTE:316583.1 - VIPCA FAILS COMPLAINING THAT INTERFACE IS NOT PUBLIC
The note above is long; here is the practical fix.
On the node where the error occurred, edit the vipca file:
[root@node2 ~]# vi $ORA_CRS_HOME/bin/vipca
Locate the following block:
#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
#End workaround
After the fi, add a new line:
unset LD_ASSUME_KERNEL
Do the same in the srvctl file:
[root@node2 ~]# vi $ORA_CRS_HOME/bin/srvctl
Locate the following lines:
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
and likewise add a new line after them:
unset LD_ASSUME_KERNEL
Save and exit, then re-run root.sh on the node that failed.
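The manual edit can also be scripted. The sketch below applies the fix with GNU sed to a scratch copy of the relevant block, so the real $ORA_CRS_HOME/bin/vipca stays untouched; to fix the real files, back them up first and run the same substitution against vipca and srvctl:

```shell
#!/bin/sh
# Demonstrate the workaround on a scratch copy of the vipca block.
cat > /tmp/vipca.demo <<'EOF'
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
  LD_ASSUME_KERNEL=2.4.19
  export LD_ASSUME_KERNEL
fi
#End workaround
EOF
# Insert "unset LD_ASSUME_KERNEL" on a new line right after the closing "fi"
# (\n in the replacement is a GNU sed feature).
sed -i 's/^fi$/fi\nunset LD_ASSUME_KERNEL/' /tmp/vipca.demo
cat /tmp/vipca.demo
```

Anchoring on the lone `fi` line matches the structure of the workaround block shown above, so the unset lands exactly where the manual instructions put it.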