Step 1: Install the operating system
The new node must run the same operating system as the existing cluster nodes; mixing operating systems is not supported. The OS version may differ, but it is best to keep versions identical.
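A quick way to confirm OS consistency is to compare the release string across all nodes. The sketch below uses the hostnames from this walkthrough (rac11g1/rac11g2/rac11g3) and assumes a Red Hat-family system; adjust both to your environment.

```shell
#!/bin/sh
# Sketch: compare the OS release across the existing nodes and the node
# to be added. Hostnames are the ones used in this walkthrough.
get_node_release() {
    # BatchMode avoids hanging on a password prompt if SSH user
    # equivalence is not configured yet.
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" cat /etc/redhat-release 2>/dev/null \
        || echo "(unreachable)"
}

for node in rac11g1 rac11g2 rac11g3; do
    echo "== $node: $(get_node_release "$node")"
done
```

All three lines should show the same release; "(unreachable)" simply means the node cannot be reached over SSH yet.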
Step 2: Configure the network
Configure the storage network, the private (interconnect) network, and the public network.
Step 3: Configure storage
Mount the shared storage on the new node's operating system.
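Before moving on, it is worth confirming that the shared devices are actually visible on the new node. A minimal read-only sketch; the device paths /dev/sdb through /dev/sdd are placeholders and must be matched to your actual storage or multipath layout:

```shell
#!/bin/sh
# Sketch: check that the shared-storage block devices exist on this node.
# The paths below are hypothetical; substitute your real device names.
check_block_device() {
    if [ -b "$1" ]; then
        echo "$1 visible"
    else
        echo "$1 MISSING"
    fi
}

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    check_block_device "$dev"
done
```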
Step 4: Configure the system
1) Install the required software packages
2) Adjust the system (kernel) parameters
3) Update the hosts file (remember to update the hosts files on the other nodes as well)
4) Create the users and groups
5) Set the environment variables
6) Configure SSH user equivalence (chiefly between the grid and oracle users on the existing cluster nodes and the new node)
7) Prepare for CTSS (stop the NTP service, rename the /etc/ntp.conf file, and preferably synchronize the node's clock manually once first)
8) Install the cvuqdisk package
9) Install the ASMLib packages and run "oracleasm scandisks" to scan for the ASM disks
10) Validate the new node with CVU
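Two of the checks above are easy to script before invoking CVU. The sketch below is read-only and changes nothing; the node name rac11g3 comes from this walkthrough:

```shell
#!/bin/sh
# Sketch: pre-cluvfy sanity checks (read-only).
NEW_NODE=${NEW_NODE:-rac11g3}   # node being added in this walkthrough

# 1) SSH user equivalence must work without a password prompt.
check_ssh_equivalence() {
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true 2>/dev/null; then
        echo "ssh equivalence to $1 OK"
    else
        echo "ssh equivalence to $1 NOT configured"
    fi
}

# 2) CTSS only switches to active mode when no NTP configuration is
#    found, which is why /etc/ntp.conf has to be renamed.
check_ntp_conf() {
    if [ -e "$1" ]; then
        echo "$1 present: CTSS will stay in observer mode"
    else
        echo "$1 absent: CTSS can run in active mode"
    fi
}

check_ssh_equivalence "$NEW_NODE"
check_ntp_conf /etc/ntp.conf
```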
Before running the validation, make sure user equivalence is in place for the grid and oracle users. Then, as the grid and oracle users on any existing cluster node, run the following command to verify that the new node meets the requirements for the Grid Infrastructure and Database software:
[grid@rac11g2 ~]$ cluvfy stage -pre nodeadd -n rac11g3 -verbose
Performing pre-checks for node addition
Checking node reachability...
Check: Node reachability from node "rac11g2"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac11g3                               yes
Result: Node reachability check passed from node "rac11g2"
Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac11g3                               passed
Result: User equivalence check passed for user "grid"
Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac11g1"
The Oracle Clusterware is healthy on node "rac11g2"
CRS integrity check passed
Checking shared resources...
... (output truncated) ...
Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac11g3       passed                    does not exist
  rac11g2       passed                    does not exist
Result: User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  rac11g2                               failed
  rac11g3                               failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac11g2,rac11g3
File "/etc/resolv.conf" is not consistent across nodes
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Pre-check for node addition was unsuccessful on all the nodes.
To add a node, first install the Clusterware software on it; the ASM software is added automatically during that process. Next add the RAC Database software, and finally add a database instance for the new node.
1. Add the Clusterware software to the new node
Before adding, the server configuration work must be complete, the $GRID_HOME path must be correct, and the $ORACLE_HOME environment variable must be set properly. Complete the following steps to add the Clusterware software to the new node:
Step 1: Make sure the CVU validation passes.
Step 2: Run the following commands to add the Clusterware software to the new node (as the grid user on an existing cluster node).
If DNS is not being used for name resolution, the /etc/resolv.conf inconsistency reported above can be ignored.
On rac11g1 or rac11g2, change to the /u01/app/11.2.0/grid/oui/bin directory and run:
[grid@rac11g1 bin]$ cd /u01/app/11.2.0/grid/oui/bin
[grid@rac11g1 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@rac11g1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac11g3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac11g3-vip}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 3451 MB    Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
Performing tests to see whether nodes rac11g2,rac11g3 are available
............................................................... 100% Done.
..
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      rac11g3
         /: Required 5.62GB : Available 30.94GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11g 11.2.0.4.0
      Java Development Kit 1.5.0.51.10
      Installer SDK Component 11.2.0.4.0
      Oracle One-Off Patch Installer 11.2.0.3.4
      ... (list truncated) ...
      Oracle Net Listener 11.2.0.4.0
      Cluster Ready Services Files 11.2.0.4.0
      Oracle Database 11g 11.2.0.4.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Saturday, March 14, 2015 7:13:49 AM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Saturday, March 14, 2015 7:13:57 AM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes
Saving inventory on nodes (Saturday, March 14, 2015 7:45:33 AM CST)
.                                                               100% Done.
Save inventory complete
WARNING: A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'rac11g3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes rac11g3
/u01/app/11.2.0/grid/root.sh #On nodes rac11g3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
Step 3: After the previous step succeeds, run the following two scripts as the root user on the new node:
[root@rac11g3 oraInventory]# cd /u01/app/oraInventory
[root@rac11g3 oraInventory]# ./orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac11g3 oraInventory]# cd /u01/app/11.2.0/grid/
[root@rac11g3 grid]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac11g1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
After the three steps above complete successfully, the Clusterware addition for the new node is finished. Check the cluster resources:
[grid@rac11g1 bin]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.CRS.dg     ora....up.type ONLINE    ONLINE    rac11g1
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac11g1
ora.FRA.dg     ora....up.type ONLINE    ONLINE    rac11g1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac11g1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac11g2
ora.asm        ora.asm.type   ONLINE    ONLINE    rac11g1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac11g2
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac11g1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac11g2
ora.ons        ora.ons.type   ONLINE    ONLINE    rac11g1
ora.orcl.db    ora....se.type ONLINE    ONLINE    rac11g1
ora....SM1.asm application    ONLINE    ONLINE    rac11g1
ora....G1.lsnr application    ONLINE    ONLINE    rac11g1
ora....1g1.gsd application    OFFLINE   OFFLINE
ora....1g1.ons application    ONLINE    ONLINE    rac11g1
ora....1g1.vip ora....t1.type ONLINE    ONLINE    rac11g1
ora....SM2.asm application    ONLINE    ONLINE    rac11g2
ora....G2.lsnr application    ONLINE    ONLINE    rac11g2
ora....1g2.gsd application    OFFLINE   OFFLINE
ora....1g2.ons application    ONLINE    ONLINE    rac11g2
ora....1g2.vip ora....t1.type ONLINE    ONLINE    rac11g2
ora....SM3.asm application    ONLINE    ONLINE    rac11g3
ora....G3.lsnr application    ONLINE    ONLINE    rac11g3
ora....1g3.gsd application    OFFLINE   OFFLINE
ora....1g3.ons application    ONLINE    ONLINE    rac11g3
ora....1g3.vip ora....t1.type ONLINE    ONLINE    rac11g3
ora....ry.acfs ora....fs.type ONLINE    ONLINE    rac11g1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac11g2
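Note that crs_stat is deprecated in 11.2; crsctl gives the same resource overview in a grouped, node-by-node layout. A sketch that falls back gracefully when the Grid tools are not in the PATH:

```shell
#!/bin/sh
# Sketch: preferred 11.2 way to list cluster resources. crs_stat still
# works but is deprecated in favor of crsctl.
show_cluster_resources() {
    if command -v crsctl >/dev/null 2>&1; then
        crsctl stat res -t
    else
        echo "crsctl not in PATH; run it from \$GRID_HOME/bin as grid"
    fi
}

show_cluster_resources
```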
2. Add the RAC Database software to the new node
Complete the following steps to add the RAC Database software:
Step 1: Add the Database software to the new node (as the oracle user on an existing cluster node):
[oracle@rac11g1 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/oui/bin
[oracle@rac11g1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac11g3}"
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "rac11g1"
Checking user equivalence...
User equivalence check passed for user "oracle"
WARNING:
Node "rac11g3" already appears to be part of cluster
Pre-check for node addition was successful.
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 3332 MB    Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
Performing tests to see whether nodes rac11g2,rac11g3 are available
............................................................... 100% Done.
..
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/oracle/product/11.2.0/db_1
   New Nodes
Space Requirements
   New Nodes
      rac11g3
         /: Required 4.31GB : Available 26.70GB
Installed Products
   Product Names
      Oracle Database 11g 11.2.0.4.0
      Java Development Kit 1.5.0.51.10
      Installer SDK Component 11.2.0.4.0
      ... (list truncated) ...
      Oracle OLAP 11.2.0.4.0
      Oracle Spatial 11.2.0.4.0
      Oracle Partitioning 11.2.0.4.0
      Enterprise Edition Options 11.2.0.4.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Saturday, March 14, 2015 8:16:03 AM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Saturday, March 14, 2015 8:16:10 AM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes
Saving inventory on nodes (Saturday, March 14, 2015 8:25:23 AM CST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes rac11g3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
Step 2: After the previous step completes, run $ORACLE_HOME/root.sh as the root user on the new node:
[root@rac11g3 grid]# cd /u01/app/oracle/product/11.2.0/db_1/
[root@rac11g3 db_1]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
Once these steps are complete, the RAC Database software addition for the new node is finished.
Step 3: Validate with CVU
On an existing cluster node or on the new node, as the grid and oracle users, run the following command to verify that the Clusterware and Database software were added correctly:
[grid@rac11g3 ~]$ cluvfy stage -post nodeadd -n rac11g3 -verbose
3. Add a database instance on the new node
Once both the Clusterware and the RAC Database software have been added successfully, a RAC instance can be added for the node.
Run the following dbca command to add the database instance in silent mode (as the oracle user on an existing cluster node):
[oracle@rac11g1 bin]$ dbca -silent -addInstance -nodeList "rac11g3" -gdbName "orcl" -instanceName "orcl3" -sysDBAUserName "sys" -sysDBAPassword oracle
Adding instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
66% complete
Completing instance management.
76% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.
[root@rac11g3 db_1]# su - oracle
[oracle@rac11g3 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Sat Mar 14 08:59:04 2015
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> select open_mode from v$database;
OPEN_MODE
--------------------
READ WRITE
SQL>
SQL> select instance_number, instance_name, status from gv$instance;
INSTANCE_NUMBER INSTANCE_NAME    STATUS
--------------- ---------------- ------------
              3 orcl3            OPEN
              2 orcl2            OPEN
              1 orcl1            OPEN
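Besides querying gv$instance, srvctl can confirm that the new instance is registered with Clusterware. A sketch using the database name orcl from this walkthrough, with a graceful fallback when srvctl is not in the PATH:

```shell
#!/bin/sh
# Sketch: ask Clusterware for the status of every instance of the
# database. Requires the oracle user's environment to be set.
check_database_status() {
    if command -v srvctl >/dev/null 2>&1; then
        srvctl status database -d "$1"
    else
        echo "srvctl not in PATH; set the oracle environment first"
    fi
}

check_database_status orcl
```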
That completes the entire RAC node addition process.