I was installing RAC in a virtual machine by following 三思's guide. The early steps went smoothly, and the few problems I ran into were easy to fix as I went, but when dbca tried to create the ASM instance it threw an error that puzzled me for a long time:
ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:skgxpcini failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: past_inmemor,
The CRS status was normal:
[oracle@node1 bin]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
The cluster_interconnect configuration also looked normal, except for an extra virbr0 entry that I had never paid attention to before:
[root@node1 bin]# ./oifcfg getif
eth0 192.168.100.0 global public
eth1 10.10.17.0 global cluster_interconnect
virbr0 192.168.122.0 global cluster_interconnect
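That virbr0 entry is the NAT bridge that libvirt/KVM creates on the host (192.168.122.0/24 by default); it is not a real cluster NIC and has no business being registered as an interconnect. As a quick sanity check, a minimal sketch (assuming the four-column getif output format shown above) that lists every interface registered as cluster_interconnect:

```shell
# Sample oifcfg getif output, as shown above. In practice you would
# capture it with: getif_output=$($ORA_CRS_HOME/bin/oifcfg getif)
getif_output='eth0 192.168.100.0 global public
eth1 10.10.17.0 global cluster_interconnect
virbr0 192.168.122.0 global cluster_interconnect'

# Print every interface registered as cluster_interconnect; with no
# bonding in play, anything beyond the one private NIC is suspect.
echo "$getif_output" | awk '$4 == "cluster_interconnect" { print $1 }'
```

Here it prints both eth1 and virbr0, which immediately flags the stray entry.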
I searched for a long time and tried a number of fixes, none of which worked. Eventually I found a note on Metalink describing exactly the problem I was hitting: the extra virbr0 entry was the culprit. After running oifcfg delif -global virbr0, the problem was solved. The relevant part of the note:
Cause
There are too many entries for cluster_interconnect in the OCR. $ORA_CRS_HOME/bin/oifcfg getif shows:
bond0 192.168.253.0 global cluster_interconnect
eth0 192.168.254.0 global cluster_interconnect
eth2 192.168.253.0 global cluster_interconnect
eth4 192.168.253.0 global cluster_interconnect
eth6 160.106.25.100 global public
In this case, eth2 and eth4 are the underlying interfaces for bond0, and eth0 is not related to the cluster_interconnect, so only the bond0 interface should be used as the cluster_interconnect.
Solution
Run the following commands to delete the extra interfaces, e.g.:
$ORA_CRS_HOME/bin/oifcfg delif -global eth0
$ORA_CRS_HOME/bin/oifcfg delif -global eth2
$ORA_CRS_HOME/bin/oifcfg delif -global eth4
After this, $ORA_CRS_HOME/bin/oifcfg getif should show:
bond0 192.168.253.0 global cluster_interconnect
eth6 160.106.25.100 global public
Start the instance again; it comes up without error.
(Source: Metalink Doc ID 387396.1)
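The cleanup in the note can also be sanity-checked offline. A minimal sketch (assuming the five-line getif output quoted in the note) that filters out the interfaces targeted by the delif commands and shows what getif should report afterwards:

```shell
# "Before" state, as quoted from the Metalink note.
before='bond0 192.168.253.0 global cluster_interconnect
eth0 192.168.254.0 global cluster_interconnect
eth2 192.168.253.0 global cluster_interconnect
eth4 192.168.253.0 global cluster_interconnect
eth6 160.106.25.100 global public'

# Interfaces removed by the oifcfg delif commands.
deleted="eth0 eth2 eth4"

# Keep only the lines whose interface name was not deleted.
echo "$before" | awk -v del="$deleted" '
  BEGIN { n = split(del, d, " "); for (i = 1; i <= n; i++) skip[d[i]] = 1 }
  !($1 in skip)'
# -> bond0 192.168.253.0 global cluster_interconnect
# -> eth6 160.106.25.100 global public
```

The surviving entries match the "after" getif output in the note: one bonded interconnect plus the public NIC.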
From the ITPUB blog: http://blog.itpub.net/13350499/viewspace-600486/ (please credit the source when reposting).