1. Running root.sh fails as follows:
[root@rac2 crs]# ./root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
The actual cause is that the storage device file names differ from those used in the previous installation. Delete the file below and re-run root.sh:
[root@rac2 crs]# rm /etc/oracle/scls_scr/rac2/oracle/cssfatal
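The state file that makes root.sh believe the stack is already configured lives under /etc/oracle/scls_scr/&lt;hostname&gt;/oracle/. A minimal sketch of the cleanup, shown on a scratch directory under /tmp so it can run without root; on the real node the path starts at /etc/oracle:

```shell
# Real path on the node: /etc/oracle/scls_scr/<hostname>/oracle/cssfatal
# /tmp/etc-oracle is a stand-in so this sketch is runnable anywhere.
base=/tmp/etc-oracle/scls_scr/rac2/oracle
mkdir -p "$base"
touch "$base/cssfatal"      # simulate the stale state file left by the old install
rm -f "$base/cssfatal"      # root.sh recreates it when it reconfigures CSS
ls "$base" | wc -l          # prints 0: the directory is empty again
```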
2. dd the device files
During the RAC install, the second node hung indefinitely at:
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
It turned out that the voting disk and OCR had not been wiped cleanly. The command Oracle gives is:
dd if=/dev/zero of=/dev/raw/raw1 bs=8192 count=12800
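The same wipe is normally needed on every raw device backing the OCR and voting disk. A hedged sketch of looping over them; here dd writes to ordinary files under /tmp so the block is runnable anywhere, while on the cluster the targets would be the actual raw devices (e.g. /dev/raw/raw1, /dev/raw/raw2) and the loop would run as root:

```shell
# Stand-in targets; on a real node these would be the raw devices
# backing the OCR and voting disk (e.g. /dev/raw/raw1 /dev/raw/raw2).
for dev in /tmp/raw1.img /tmp/raw2.img; do
  # 8192-byte blocks * 12800 blocks = 104857600 bytes of zeros per device
  dd if=/dev/zero of="$dev" bs=8192 count=12800 2>/dev/null
done
ls -l /tmp/raw1.img /tmp/raw2.img
```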
3. An error commonly appears when root.sh is run on the last node:
CSS is active on these nodes.
node1
node2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/oracle/product/10.2.0/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
When the error above appears, modifying the following scripts on every node resolves it:
[root@node2 bin]# vi /u01/app/oracle/product/10.2.0/crs/bin/vipca
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL ------ added line
#End workaround
;;
[root@node2 bin]# vi /u01/app/oracle/product/10.2.0/crs/bin/srvctl
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL ------ added line
[root@node1 bin]# vi /u01/app/oracle/product/10.2.0/crs/bin/vipca
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL ------ added line
#End workaround
;;
[root@node1 bin]# vi /u01/app/oracle/product/10.2.0/crs/bin/srvctl
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL ------ added line
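Rather than editing vipca and srvctl by hand on every node, the same one-line fix can be applied with sed. A sketch, demonstrated on a scratch copy so it is runnable here; on the cluster the real files are $CRS_HOME/bin/vipca and $CRS_HOME/bin/srvctl, and in vipca the doc above places the unset after the closing "fi" of the arch test:

```shell
# Scratch copy mimicking the relevant lines of srvctl:
f=/tmp/srvctl.demo
printf '%s\n' 'LD_ASSUME_KERNEL=2.4.19' 'export LD_ASSUME_KERNEL' > "$f"
# Append "unset LD_ASSUME_KERNEL" immediately after the export, as above:
sed -i '/^export LD_ASSUME_KERNEL$/a unset LD_ASSUME_KERNEL' "$f"
cat "$f"
```

GNU sed's `-i` edits in place and `a` appends a line after each match; here the export line occurs once, so exactly one unset line is added.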
Then re-run root.sh on node 2:
[root@node2 bin]# /u01/app/oracle/product/10.2.0/crs/root.sh
4.
5. Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
This error from vipca/srvctl typically means the public and cluster_interconnect interfaces have not been registered yet; define them with oifcfg:
[root@node2 bin]# ./oifcfg iflist
eth1 10.10.17.0
virbr0 192.168.122.0
eth0 192.168.100.0
[root@node2 bin]# ./oifcfg setif -global eth0/192.168.100.0:public
[root@node2 bin]# ./oifcfg setif -global eth1/10.10.17.0:cluster_interconnect
[root@node2 bin]# ./oifcfg getif
eth0 192.168.100.0 global public
eth1 10.10.17.0 global cluster_interconnect
Then re-run $CRS_HOME/install/rootdelete.sh on the node, followed by $CRS_HOME/root.sh.