10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA / SRVCTL / OUI Failures) (Doc ID 41

When installing Oracle Clusterware 10.2.0.1 on OEL 5.8, running root.sh on the last node fails with the following error:

Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/home/oracle/crs/oracle/product/10/crs/jdk/jre//bin/java: error while loading 
shared libraries: libpthread.so.0: cannot open shared object file: 
No such file or directory 

or:

Error 0(Native: listNetInterfaces:[3]) 
[Error 0(Native: listNetInterfaces:[3])]

In 10gR2, the VIPs are configured when root.sh runs on the last node, so if this final VIP step fails, the entire Clusterware installation fails.

The following Metalink article resolves these problems:





In this Document

Symptoms
 Cause
 Solution
 Scalability RAC Community
 References

APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.2.0.1 to 10.2.0.3 [Release 10.2]
Linux x86
Generic Linux
Linux x86-64
***Checked for relevance on 04-Aug-2010***
***Checked for relevance on 11-Mar-2013***


SYMPTOMS

When installing 10gR2 RAC on Oracle Enterprise Linux 5, RHEL5, or SLES10, there are three issues to be aware of.

Issue#1: To install 10gR2, you must first install the base release, which is 10.2.0.1. As these OS versions are newer than the installer expects, use the following command to invoke the installer:

$ runInstaller -ignoreSysPrereqs        # bypasses the OS version check


Issue#2: At the end of root.sh on the last node, vipca will fail to run with the following error:

Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/home/oracle/crs/oracle/product/10/crs/jdk/jre//bin/java: error while loading 
shared libraries: libpthread.so.0: cannot open shared object file: 
No such file or directory 

Also, srvctl will show similar output if the workaround below is not implemented.

Issue#3: After working around Issue#2 above, vipca will fail to run with the following error if the VIP IPs are in a non-routable range [10.x.x.x, 172.(16-31).x.x, or 192.168.x.x]:

# vipca
Error 0(Native: listNetInterfaces:[3]) 
[Error 0(Native: listNetInterfaces:[3])]

CAUSE

These releases of the Linux kernel fix an old bug in Linux threading that Oracle worked around by setting LD_ASSUME_KERNEL in both vipca and srvctl. That workaround is no longer valid on OEL5, RHEL5, or SLES10, hence the failures.

SOLUTION

If you have not yet run root.sh on the last node, implement the workaround for Issue#2 below and then run root.sh (you may skip running the vipca portion at the bottom of this note).
If you have a non-routable IP range for the VIPs, you will also need the workaround for Issue#3, and then run vipca manually.

To work around Issue#2 above, edit vipca (in the CRS bin directory on all nodes) to undo the setting of LD_ASSUME_KERNEL. After the IF statement around line 120, add an unset command to ensure LD_ASSUME_KERNEL is not set, as follows:

if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
  LD_ASSUME_KERNEL=2.4.19
  export LD_ASSUME_KERNEL
fi

unset LD_ASSUME_KERNEL         <<<== Line to be added
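If the same edit has to be repeated on every node, it can be scripted. The sketch below demonstrates the insertion with sed on a sample copy of the block, not the real vipca file; the /tmp path and the `/^fi$/` address are illustrative, so verify that the pattern matches only the intended spot in your actual file before editing it in place.

```shell
# Write a sample copy of the LD_ASSUME_KERNEL block (demonstration only).
cat > /tmp/vipca_snippet <<'EOF'
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
  LD_ASSUME_KERNEL=2.4.19
  export LD_ASSUME_KERNEL
fi
EOF
# Append "unset LD_ASSUME_KERNEL" after the closing "fi" of the block,
# so the unset runs unconditionally regardless of the architecture test.
sed -i '/^fi$/a unset LD_ASSUME_KERNEL' /tmp/vipca_snippet
tail -n 1 /tmp/vipca_snippet    # → unset LD_ASSUME_KERNEL
```

On a real system you would run the same sed against <CRS_HOME>/bin/vipca on each node, keeping a backup copy first.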

 

Similarly for srvctl (in the CRS bin directory and, when installed, the RDBMS and ASM bin directories on all nodes), unset LD_ASSUME_KERNEL by adding one line; the code around line 168 should look like this:

LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL

unset LD_ASSUME_KERNEL          <<<== Line to be added

 

Remember to re-edit these files on all nodes after applying the 10.2.0.2 or 10.2.0.3 patchsets, as those patchsets still include the settings, which are unnecessary on OEL5, RHEL5, and SLES10:
<CRS_HOME>/bin/vipca
<CRS_HOME>/bin/srvctl
<RDBMS_HOME>/bin/srvctl
<ASM_HOME>/bin/srvctl

This issue was raised with development and is fixed in the 10.2.0.4 patchset.

Note that LD_ASSUME_KERNEL is explicitly unset, rather than merely commented out, to handle the case where the user already has it set in their environment (login shell).
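The difference is easy to see in a plain shell session: a value inherited from the environment survives a commented-out assignment, but not an explicit unset.

```shell
# Simulate LD_ASSUME_KERNEL arriving from the login shell's environment.
export LD_ASSUME_KERNEL=2.4.19
# A commented-out assignment changes nothing; the inherited value remains:
# LD_ASSUME_KERNEL=2.4.19
echo "${LD_ASSUME_KERNEL:-<not set>}"    # → 2.4.19
# An explicit unset clears it no matter where it was originally set:
unset LD_ASSUME_KERNEL
echo "${LD_ASSUME_KERNEL:-<not set>}"    # → <not set>
```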

 

To work around Issue#3 (vipca failing on non-routable VIP IP ranges, whether run manually or during root.sh): if you still have the OUI window open, click OK and it will create the "oifcfg" information; cluvfy will then fail because vipca did not complete successfully. Skip ahead in this note, run vipca manually, then return to the installer, and cluvfy will succeed. Otherwise, you may configure the interfaces for RAC manually using the oifcfg command as root (from any node), as in the following example:

<CRS_HOME>/bin # ./oifcfg setif -global eth0/192.168.1.0:public 
<CRS_HOME>/bin # ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect 
<CRS_HOME>/bin # ./oifcfg getif 
 eth0 192.168.1.0 global public 
 eth1 10.10.10.0 global cluster_interconnect

 

The goal is for the output of "oifcfg getif" to include both the public and cluster_interconnect interfaces; substitute the IP addresses and interface names from your own environment. To find the proper IPs in your environment, run this command:

<CRS_HOME>/bin # ./oifcfg iflist
eth0 192.168.1.0
eth1 10.10.10.0 
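Note that oifcfg takes the subnet address (e.g. 192.168.1.0), not a host IP. As a sanity check, the subnet can be derived from any host IP and its netmask with shell arithmetic; the 192.168.1.51/255.255.255.0 values below are examples only.

```shell
# Derive the subnet address (what oifcfg setif expects) from a host IP
# and its netmask, using only POSIX shell arithmetic.
ip=192.168.1.51
mask=255.255.255.0
oldIFS=$IFS
IFS=.
set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4     # split the IP into octets
set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4     # split the mask into octets
IFS=$oldIFS
# Bitwise-AND each octet pair to obtain the network address.
subnet="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
echo "$subnet"    # → 192.168.1.0
```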





Running VIPCA:

After implementing the above workaround(s), you should be able to invoke vipca manually (as root, from the last node) and configure the VIP IPs via the GUI interface.

<CRS_HOME>/bin # export DISPLAY=<x-display:0>
<CRS_HOME>/bin # ./vipca

Make sure the DISPLAY environment variable is set correctly and that you can open xclock or other X applications from that shell.

Once vipca completes, all the Clusterware resources (VIP, GSD, ONS) will be started; there is no need to re-run root.sh, since vipca is the last step in root.sh.

 

To verify the Clusterware resources are running correctly:

<CRS_HOME>/bin # ./crs_stat -t
Name           Type        Target State  Host 
------------------------------------------------------------
ora....ux1.gsd application ONLINE ONLINE raclinux1 
ora....ux1.ons application ONLINE ONLINE raclinux1 
ora....ux1.vip application ONLINE ONLINE raclinux1
ora....ux2.gsd application ONLINE ONLINE raclinux2
ora....ux2.ons application ONLINE ONLINE raclinux2
ora....ux2.vip application ONLINE ONLINE raclinux2
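On a cluster with many resources, eyeballing the table is error-prone, so the output can be checked with a short pipeline instead. This sketch runs against a captured sample of the output above rather than a live cluster; on a real system you would pipe `crs_stat -t` (minus its header lines) into the same awk filter.

```shell
# Count resources whose TARGET or STATE column is not ONLINE.
# The sample below is the expected two-node output from this note.
sample='ora....ux1.gsd application ONLINE ONLINE raclinux1
ora....ux1.ons application ONLINE ONLINE raclinux1
ora....ux1.vip application ONLINE ONLINE raclinux1
ora....ux2.gsd application ONLINE ONLINE raclinux2
ora....ux2.ons application ONLINE ONLINE raclinux2
ora....ux2.vip application ONLINE ONLINE raclinux2'
bad=$(printf '%s\n' "$sample" | awk '$3 != "ONLINE" || $4 != "ONLINE" { n++ } END { print n+0 }')
echo "$bad"    # → 0 when every resource is up
```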


You may now proceed with the rest of the RAC installation.

Scalability RAC Community

To discuss this topic further with Oracle experts and industry peers, visit the Scalability RAC Community on My Oracle Support.

