# Add the new node and configure its environment (/etc/security/limits.conf, /etc/sysctl.conf, /etc/hosts, /home/oracle/.bash_profile, /home/grid/.bash_profile)
# Disable the firewall, SELinux, and ntpd, then mv /etc/ntp.conf /etc/ntp.conf.bak
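A sketch of these steps on a RHEL/CentOS 6-style system (service and file names are assumptions; adapt them to your distribution), run as root:

```shell
# Disable the firewall now and at boot (assumed iptables-based system)
service iptables stop && chkconfig iptables off
# Put SELinux into permissive mode immediately and disable it permanently
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Stop ntpd and move its config aside so Oracle CTSS runs in active mode
service ntpd stop && chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.bak
```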
# Install and configure oracleasm
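A typical oracleasm configuration sequence as root (the owner/group values are assumptions matching a standard grid/asmadmin setup):

```shell
oracleasm configure -i   # set owner=grid, group=asmadmin, load driver on boot=y
oracleasm init           # load the kernel driver and mount /dev/oracleasm
oracleasm scandisks      # discover disks already labeled by the other node
oracleasm listdisks      # verify all expected ASM disks are visible
```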
## Set up SSH user equivalence for the grid and oracle users
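One way to set this up is the sshUserSetup.sh script shipped under sshsetup/ in the Grid Infrastructure installation media (the path is an example); run it once per software owner:

```shell
# Run from the unpacked grid installation media, as each software owner in turn
./sshsetup/sshUserSetup.sh -user grid -hosts "rac01 rac02" -noPromptPassphrase -advanced
./sshsetup/sshUserSetup.sh -user oracle -hosts "rac01 rac02" -noPromptPassphrase -advanced
# Verify: this must print the remote date without a password prompt
ssh rac02 date
```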
## Delete the node: run as root on a surviving node
crsctl delete node -n rac02
## Remove the reinstalled host's VIP from the OCR: run as grid on a surviving node; rac02-vip is the VIP name of the reinstalled node
srvctl remove vip -i rac02-vip -f
## Remove the reinstalled host from the GI home inventory: run as grid on a surviving node, keeping only the surviving nodes (rac01 is the surviving node)
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=rac01 CRS=TRUE -silent
## Remove the reinstalled host from the DB home inventory: run as oracle on a surviving node, keeping only the surviving nodes (rac01 is the surviving node)
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=rac01 -silent -local
## If the operating system of the node being deleted is still accessible, run the following on that node
crsctl stop has
rm -fr /etc/oraInst.loc
rm -fr /etc/oratab
rm -fr /etc/oracle/
rm -fr /opt/ORCLfmap/
rm -fr /u01
## Verify that the node has been deleted
cluvfy stage -post nodedel -n rac02
## Pre-addition check; rac02 is the name of the node being added
cluvfy stage -pre nodeadd -n rac02 -verbose
## Run the add-node command as the grid user
export IGNORE_PREADDNODE_CHECKS=Y
$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac02-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac02-priv}"
## Execute on the reinstalled node
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes rac02
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful
### The steps above complete the Grid Infrastructure installation on the reinstalled node
## Install the Oracle database software: run as the oracle user on a surviving node
$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac02}"
----$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac02}" CRS="false" -silent
/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes rac02
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful
## Silently delete the rac02 instance with dbca
dbca -silent -deleteInstance -nodeList rac02 -gdbName syntong -instanceName syntong2 -sysDBAUserName sys -sysDBAPassword oracle
## Silently add the rac02 instance back with dbca
dbca -silent -addInstance -gdbName syntong -nodelist rac02 -instanceName syntong2 -sysDBAUserName sys -sysDBAPassword oracle
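After the instance is added back, its state can be confirmed from any node (using the database name syntong from the commands above):

```shell
srvctl status database -d syntong   # both instances should report as running
srvctl config database -d syntong   # the rac02 instance should appear in the configuration
```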
## Verifying the node deletion reported the following error
cluvfy stage -post nodedel -n rac02
ERROR:
PRVF-10002 : Node "rac02" is not yet deleted from the Oracle inventory node list
Node removal check failed
Check failed on nodes:
rac02
[grid@rac01 ~]$ crsctldelete node -n rac02
-bash: crsctldelete: command not found
[grid@rac01 ~]$ crsctl delete node -n rac02
CRS-4563: Insufficient user privileges.
CRS-4000: Command Delete failed, or completed with errors.
# crsctl delete node must be run as root, which is why the attempt above as grid failed
[grid@rac01 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac01}" CRS=TRUE -local -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 7989 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@rac01 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=rac01 -silent -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 7989 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@rac01 ~]$ cluvfy stage -post nodedel -n rac02
Performing post-checks for node removal
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Node removal check passed
Post-check for node removal was successful.
PRVG-11134 : Interface "192.168.1.91" on node "rac01" is not able to communicate with interface "192.168.1.92" on node "rac02"
PRVG-11134 : Interface "192.168.1.93" on node "rac01" is not able to communicate with interface "192.168.1.92" on node "rac02"
This error can be ignored.
$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac02-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac02-priv}"
# If the error PRVF-10209 : VIPs "rac02-vip" are active before Clusterware installation appears, release the rac02-vip address first
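A sketch for releasing a leftover VIP on the reinstalled node, as root (interface name and prefix length are example values; confirm them with `ip addr` first):

```shell
# Find the interface still holding the old VIP address
ip addr show
# Drop the VIP so Clusterware can manage it again (example interface/prefix)
ip addr del <rac02-vip-address>/24 dev eth0
```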
Configuration of ASM ... failed
see asmca logs at /u01/app/grid/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6912.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
## The ASM configuration failed because the operating system was not rebooted after oracleasm was configured
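A possible recovery sketch on the new node, as root (reboot the OS instead if the oracleasm driver still fails to load; deconfiguring the failed stack is the standard way to allow root.sh to be rerun, and the paths match the Grid home used above):

```shell
# Make the ASM disks visible again
oracleasm init
oracleasm scandisks
oracleasm listdisks        # the shared disks must be listed before retrying
# Deconfigure the failed stack on this node only, then rerun root.sh
/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
/u01/app/11.2.0/grid/root.sh
```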