I. Preparation before adding the new node
1. tar -xzvf os.tar.gz
os.tar.gz is a backup of the VMware guest taken after the operating system install was completed, the OS kernel parameters were modified, and the oracle installation user and groups were created.
2. Install ASM (ASMLib)
Install:
rpm -ivh *.rpm
Configure:
/etc/init.d/oracleasm configure
The ASM configuration file is /etc/sysconfig/oracleasm; edit it to adjust the ASM settings.
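For reference, after running the configure script /etc/sysconfig/oracleasm typically looks like the sketch below. The values here are assumptions for this cluster, not taken from the original install; ORACLEASM_UID and ORACLEASM_GID must match the oracle installation user and its group created earlier.

```
# Example /etc/sysconfig/oracleasm (values are illustrative)
ORACLEASM_ENABLED=true      # load the oracleasm driver on boot
ORACLEASM_UID=oracle        # owner of the ASM disk device files
ORACLEASM_GID=dba           # group of the ASM disk device files
ORACLEASM_SCANBOOT=true     # scan for ASM disks on boot
ORACLEASM_SCANORDER=""      # device name patterns to scan first
ORACLEASM_SCANEXCLUDE=""    # device name patterns to skip
```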
3. Network configuration
Add the following entries to the hosts file on every node:
#Public
10.182.4.41 rac3
#Private
10.182.4.42 rac3-priv
#Virtual
10.182.4.43 rac3-vip
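Putting all three nodes together, each node's /etc/hosts should end up with a full set of public, private, and virtual entries along these lines. Only the rac3 addresses come from this install; the rac1 and rac2 addresses below are assumptions for illustration.

```
#Public
10.182.4.31  rac1        # assumed address
10.182.4.32  rac2        # assumed address
10.182.4.41  rac3
#Private
10.182.4.33  rac1-priv   # assumed address
10.182.4.34  rac2-priv   # assumed address
10.182.4.42  rac3-priv
#Virtual
10.182.4.35  rac1-vip    # assumed address
10.182.4.36  rac2-vip    # assumed address
10.182.4.43  rac3-vip
```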
4. Establish SSH user equivalence
[oracle@rac3 ~]$ ssh-keygen -t rsa
[oracle@rac3 ~]$ ssh-keygen -t dsa
[oracle@rac3 .ssh]$ cat *.pub > authorized_keys
Merge the newly generated authorized_keys with the authorized_keys from the existing nodes into a single authorized_keys file, and copy it to every node.
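The merge step above is just concatenation plus de-duplication. A minimal sketch, using illustrative file names and placeholder key material rather than the real keys from this cluster:

```shell
# Simulate merging the new node's public keys with the
# cluster-wide authorized_keys (contents are illustrative).
mkdir -p /tmp/sshdemo && cd /tmp/sshdemo

# authorized_keys collected from the existing nodes (rac1, rac2)
printf 'ssh-rsa AAAAkey1 oracle@rac1\nssh-rsa AAAAkey2 oracle@rac2\n' > authorized_keys.old
# keys just generated on rac3 (the "cat *.pub > authorized_keys" step)
printf 'ssh-rsa AAAAkey3 oracle@rac3\n' > authorized_keys.rac3

# Merge, dropping any duplicate lines, into the single file that is
# then copied (e.g. with scp) to ~/.ssh/authorized_keys on every node.
cat authorized_keys.old authorized_keys.rac3 | sort -u > authorized_keys
chmod 600 authorized_keys   # sshd rejects group/world-writable key files

wc -l < authorized_keys     # one line per distinct key
```

The chmod matters in practice: with StrictModes enabled, sshd silently ignores an authorized_keys file with loose permissions, which shows up later as unexplained password prompts.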
Then run the following on each node in turn:
ssh rac1 date
ssh rac2 date
ssh rac3 date
ssh rac1-priv date
ssh rac2-priv date
ssh rac3-priv date
to confirm that the trust relationship has been established correctly.
II. Adding the node
A. Install CRS:
1. Confirm that every node is online; the new node cannot be added if any node is down.
2. As the oracle user on any existing node, run the addNode.sh script under $CRS_HOME/oui/bin.
Connect to rac1 through VNC:
cd $CRS_HOME/oui/bin
sh addNode.sh
Click Next on the OUI welcome screen.
3. Specify Cluster Nodes to Add to Installation
Enter: rac3 rac3-priv rac3-vip
Click Next to continue.
4. Install
Run the scripts as prompted:
[root@rac3 ~]# sh /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@rac3 ~]# sh /u01/app/oracle/product/10.2.0/crs/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR LOCATIONS = /dev/raw/raw1
OCR backup directory '/u01/app/oracle/product/10.2.0/crs/cdata/crs' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
rac3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
IP address "rac1-vip" has already been used. Enter an unused IP address.
Because the silent vipca run stops with the error above, run vipca manually as root on rac2 to complete the VIP, GSD, and ONS configuration.
5. Confirm that CRS installed successfully
crsctl check crs
[oracle@rac3 bin]$ olsnodes
rac1
rac2
rac3
B. Install the Oracle database software
On node rac1 or rac2, run addNode.sh under $ORACLE_HOME/oui/bin:
cd $ORACLE_HOME/oui/bin
sh addNode.sh
Click Next through the remaining screens to complete the Oracle software installation.
C. Configure the listener
1. On the new node, run netca and choose Reconfigure to reconfigure the listener.
Run crs_stat -t -v to confirm the configuration succeeded.
D. Add the instance
On rac1 or rac2, run dbca and choose Instance Management.
Select node rac3 to add the instance.
When prompted "ASM is present on the cluster but needs to be extended to the following nodes: [rac3]. Do you want ASM to be extended?"
choose Yes.
If the ASM instance fails to start on rac3, fix the problem before continuing.
After the installation completes, run crs_stat -t -v to confirm that the new instance is running.
Source: ITPUB blog, http://blog.itpub.net/7419833/viewspace-677982/