Installing 11.2.0.3 (RAC) Grid Infrastructure on Linux x86-64: root.sh fails on the second node
Log under /u01/app/oracle/cfgtoollogs/asmca:
......
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA"
ORA-15080: synchronous I/O operation to a disk failed
......
......
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 regular failure groups, discovered only 0
ORA-15080: synchronous I/O operation to a disk failed
......
......
[main] [ 2013-12-25 22:26:02.987 CST ] [UsmcaLogger.logException:175] oracle.sysman.assistants.util.sqlEngine.SQLFatalErrorException:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 regular failure groups, discovered only 0
ORA-15080: synchronous I/O operation to a disk failed
......
Reference:
11GR2 GRID INFRASTRUCTURE INSTALLATION FAILS WHEN RUNNING ROOT.SH ON NODE 2 OF RAC USING ASMLIB [ID 1059847.1]
While installing Oracle Grid Infrastructure with ASM, root.sh ran successfully on the first node, but fails on the second node.

Error example

1. root.sh failed on second node with the following errors
-------------------------------------------------------
DiskGroup DATA1 creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 regular failure groups, discovered only 0
Configuration of ASM failed, see logs for details

2. rootcrs_nodename.log
-----------------------
2010-02-03 13:40:43: Configuring ASM via ASMCA
2010-02-03 13:40:43: Executing as oracle: /u01/app/1120/grid/bin/asmca -silent -diskGroupName DATA1 -diskList ORCL:DATA1 -redundancy EXTERNAL -configureLocalASM
2010-02-03 13:40:43: Running as user oracle: /u01/app/1120/grid/bin/asmca -silent -diskGroupName DATA1 -diskList ORCL:DATA1 -redundancy EXTERNAL -configureLocalASM
2010-02-03 13:40:43: Invoking "/u01/app/1120/grid/bin/asmca -silent -diskGroupName DATA1 -diskList ORCL:DATA1 -redundancy EXTERNAL -configureLocalASM" as user "oracle"
2010-02-03 13:40:51: Configuration of ASM failed, see logs for details

3. On the 2nd node, /etc/oratab shows +ASM1 rather than +ASM2.

4. The following commands on the 2nd node show the ASM disk information correctly:
/etc/init.d/oracleasm listdisks
/etc/init.d/oracleasm scandisks
ls -ltr /dev/oracleasm/disks

Cause
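Step 4 above amounts to checking that both nodes see the same set of ASMLib disks. As a minimal sketch (not part of the note itself), assuming the output of /etc/init.d/oracleasm listdisks has been saved to a file on each node, a comparison could look like this; the function name and file layout are hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: compare ASMLib disk lists captured from two RAC nodes.
# Each input file is assumed to contain the output of
#   /etc/init.d/oracleasm listdisks
# run on one node (one disk name per line).
compare_asm_disks() {
    node1_list="$1"
    node2_list="$2"
    # Sort both lists so ordering differences do not matter.
    sort "$node1_list" > /tmp/asm_node1.sorted
    sort "$node2_list" > /tmp/asm_node2.sorted
    # comm -3 prints only the lines present in just one of the two files.
    mismatch=$(comm -3 /tmp/asm_node1.sorted /tmp/asm_node2.sorted)
    if [ -z "$mismatch" ]; then
        echo "OK: both nodes see the same ASMLib disks"
    else
        echo "MISMATCH:"
        echo "$mismatch"
        return 1
    fi
}
```

If any disk shows up on one node only, disk discovery (scandisks, permissions, multipath configuration) should be fixed before rerunning root.sh.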
In fact, ASMLib fails to restart at this point. Since the GI installation is already at its last step, the simplest option is to reboot the machine so that oracleasm rescans the ASM disks.
After the reboot, the CRS configuration on node 2 must first be removed, and then root.sh rerun.
/u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
.....
After the deconfiguration completes successfully, run root.sh:

[root@NOPPLORC14 grid]# ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node nopplorc13, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
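The rerun is only good if the output ends with the "succeeded" line shown above. A small sketch for checking a captured root.sh log for that marker (the function name and the idea of capturing root.sh output to a file are assumptions, not from the original procedure):

```shell
#!/bin/sh
# Hypothetical helper: verify that a captured root.sh log reports success.
check_root_sh() {
    logfile="$1"
    # grep -F matches the status line literally (the dots are not wildcards).
    if grep -qF "Configure Oracle Grid Infrastructure for a Cluster ... succeeded" "$logfile"; then
        echo "root.sh succeeded"
    else
        echo "root.sh did not report success; check $logfile"
        return 1
    fi
}
```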
Afterwards, if node 1's ASM instance turns out to have disappeared after the reboot, node 1 likewise can only be rebooted, followed by the same deconfigure-and-rerun-root.sh procedure as above.
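To decide whether node 1's ASM instance has survived, one way is to parse the single status line that srvctl status asm prints (e.g. "ASM is running on nopplorc13,nopplorc14"). The helper below is a hypothetical sketch of that parsing, not an Oracle-provided tool:

```shell
#!/bin/sh
# Hypothetical helper: given the output line of "srvctl status asm",
# report (via exit status) whether a particular node appears in the
# running-node list.
asm_running_on() {
    status_line="$1"
    node="$2"
    # Strip the fixed prefix, then look for the node name between commas.
    nodes=$(echo "$status_line" | sed 's/^ASM is running on //')
    case ",$nodes," in
        *",$node,"*) return 0 ;;
        *)           return 1 ;;
    esac
}
```

Usage: asm_running_on "$(srvctl status asm)" nopplorc13 || echo "ASM is gone on node 1"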
ASMLib (version oracleasmlib-2.0.4-1) has been a good tool since its introduction in 10g, and the kernel version here should in theory have met the installation requirements, so doubts remain about the root cause; I will keep tracking this.