While installing Oracle RAC, the graphical installer reported an error at the network interface selection step: the nodes could not reach each other.
[INS-41112] Specified network interface doesnt maintain connectivity across cluster nodes
Database version: 12.1.0.2
Operating system: HP-UX 11.31
To narrow the problem down, I first suspected the 12.1.0.2 release itself. I uploaded the 11.2.0.4 Grid Infrastructure media, a version I had installed on HP-UX many times without ever hitting this problem. This time the verification produced a different, explicit error:
xcywa2:/u01/media1/grid> ./runcluvfy.sh comp nodecon -i lan16 -n xcywa1,xcywa2 -verbose
Verifying node connectivity
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
xcywa2 passed
xcywa1 passed
Verification of the hosts config file successful
Interface information for node "xcywa2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
lan17 10.10.10.11 10.10.10.0 10.10.10.11 10.241.8.254 D6:2A:4D:6B:98:8C 1500
lan16 10.241.8.11 10.241.8.0 10.241.8.11 10.241.8.254 8A:98:AD:92:B3:BD 1500
Interface information for node "xcywa1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
lan17 10.10.10.10 10.10.10.0 10.10.10.10 10.241.8.254 F2:69:5F:F0:72:53 1500
lan16 10.241.8.10 10.241.8.0 10.241.8.10 10.241.8.254 F2:97:2C:48:01:C1 1500
Check: Node connectivity for interface "lan16"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
xcywa2[10.241.8.11] xcywa1[10.241.8.10] yes
Result: Node connectivity passed for interface "lan16"
Check: TCP connectivity of subnet "10.241.8.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
xcywa2:10.241.8.11 xcywa1:10.241.8.10 failed
ERROR:
PRVF-7617 : Node connectivity between "xcywa2 : 10.241.8.11" and "xcywa1 : 10.241.8.10" failed
Result: TCP connectivity check failed for subnet "10.241.8.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed for subnet "10.241.8.0".
Subnet mask consistency check passed.
Result: Node connectivity check failed
Verification of node connectivity was unsuccessful on all the specified nodes.
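Note the pattern in the output above: the plain node connectivity check on lan16 passed (the nodes can ping each other), yet the TCP check between the very same addresses failed. That is the classic signature of a packet filter. Below is a minimal sketch of the kind of probe cluvfy performs; port 55555 and the loopback address are my stand-ins (cluvfy chooses its own ports, and on a real cluster the listener would run on the peer's interconnect IP, e.g. 10.241.8.10):

```shell
# Sketch of cluvfy's TCP check: one endpoint listens, the other connects.
PEER=127.0.0.1
PORT=55555
# Stand-in listener for the remote node (cluvfy starts its own):
python3 -c 'import socket
s = socket.socket(); s.settimeout(10)
s.bind(("127.0.0.1", 55555)); s.listen(1)
s.accept()' &
sleep 1
# The probe: open a TCP connection to the peer. A firewall (ipf, iptables)
# silently dropping packets makes exactly this step fail while ping still works.
if bash -c ": > /dev/tcp/$PEER/$PORT" 2>/dev/null; then
    RESULT=passed
else
    RESULT=failed
fi
echo "TCP connectivity: $RESULT"
```

If this fails between two nodes that can ping each other, look for a packet filter before suspecting the Oracle media.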
So here, for the same interface, we finally get a different and explicit error: PRVF-7617.
Two MOS documents cover PRVF-7617:
PRVF-7617: TCP connectivity check failed for subnet (Doc ID 1335136.1)
PRVF-7617 Cluster Verify (cluvfy) Fails For Network Check if Firewall Exists (Doc ID 1357657.1)
In an earlier post I already quoted the content of Doc ID 1335136.1; it does not help narrow this problem down.
Doc ID 1357657.1, however, states explicitly that the node connectivity check fails when a firewall exists between the nodes, and its symptoms match ours exactly. The catch is that while disabling the firewall on Linux is trivial, HP-UX has no iptables-style firewall to turn off. An HP engineer I know well provided the fix:
Edit /etc/rc.config.d/ipfconf and set IPF_START=0.
Then stop ipf:
/sbin/init.d/ipfboot stop
ipf (IPFilter) is HP-UX's TCP/IP packet-filtering module, i.e. a firewall in its own right.
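Put together, the HP-UX side looks like this. A sketch to be run as root; ipfstat and ipf are the standard IPFilter utilities, and the paths are the HP-UX 11.31 defaults, so verify them on your own system:

```shell
# Sketch: disable HP-UX IPFilter (ipf) so it stops filtering interconnect traffic.
ipfstat -io                     # list the currently loaded in/out filter rules
# Persistently disable: set IPF_START=0 in the rc config file
vi /etc/rc.config.d/ipfconf     # change IPF_START=1 to IPF_START=0
/sbin/init.d/ipfboot stop       # stop the running filter now
ipf -V                          # verify the filter reports it is no longer running
```

After this, rerun runcluvfy.sh comp nodecon; the TCP connectivity check should pass.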
This problem never appeared in earlier installs; it happened to surface this time on an unfamiliar release. On-site system engineers are sometimes inexperienced, or have had little practice working alongside DBAs, and most installation problems turn out to be on the OS side: one package too many or too few, or a single system parameter, can cause real trouble for a cluster install. Defusing these mines is hard work, but the sense of achievement after a successful defusal makes it worthwhile.
PRVF-7617 Cluster Verify (cluvfy) Fails For Network Check if Firewall Exists (Doc ID 1357657.1)
Applies to: Oracle Database - Enterprise Edition - Version 11.2.0.1 and later

Symptoms:
During Cluster Verification, a part of cluster installation, the connectivity check between nodes may fail with the following errors:

Check: TCP connectivity of subnet "10.0.0.0"
Source                         Destination                    Connected?
------------------------------ ------------------------------ ----------------
racnode01:10.0.0.1             racnode02:10.0.0.2             failed
ERROR:
PRVF-7617 : Node connectivity between "racnode01 : 10.0.0.1" and "racnode02 : 10.0.0.2" failed
Result: TCP connectivity check failed for subnet "10.0.0.0"

This may occur on any of the interfaces.

Cause:
iptables (a Linux firewall) is active between the nodes, blocking network traffic on the cluster interconnect network.

Solution:
1. A temporary solution is to disable iptables. A more permanent solution, if iptables is required, is to configure iptables so that it does not block interconnect traffic (no firewall should exist between cluster nodes).
# service iptables save
# service ip6tables save
Note: IPv6 is not supported with Oracle Clusterware/RAC 11gR2.
2. If SELinux is set to enforcing, changing it to permissive or disabled can also help. To check what mode the system is running in:
# cat /selinux/enforce
To temporarily switch off enforcement, as root:
# echo 0 > /selinux/enforce
This switches to permissive mode; no reboot is required. To permanently switch off enforcement, edit /etc/selinux/config and change SELINUX to either "permissive" or "disabled". A server reboot is required if SELINUX is changed to "disabled".
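The MOS document says to configure iptables so that interconnect traffic is not blocked, but does not show an actual rule. A sketch of one way to do it, assuming the interconnect subnet is 10.241.8.0/24 as in our cluster (substitute your own subnet); this uses the RHEL 5/6-era iptables/service syntax matching the document, run as root:

```shell
# Accept all traffic from the interconnect subnet before any DROP/REJECT rules:
iptables -I INPUT 1 -s 10.241.8.0/24 -j ACCEPT
# Persist the rule set across reboots (writes /etc/sysconfig/iptables):
service iptables save
```

This keeps the firewall active for other interfaces while exempting the interconnect, which is the "more permanent solution" the document alludes to.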
NOTE: 1054902.1 - How to Validate Network and Name Resolution Setup for the Clusterware and RAC
From the ITPUB blog: http://blog.itpub.net/29582917/viewspace-2122704/. Please credit the source when reposting; otherwise legal responsibility may be pursued.