11gR2 11.2.0.3 RAC addNode
We have an existing two-node 11gR2 11.2.0.3 RAC environment to which a third node must be added. The basic configuration of the two current RAC nodes is as follows:
# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
#ADD
192.168.20.21   rac1
10.0.0.20       rac1-priv
192.168.20.20   rac1-vip
192.168.20.22   rac2
10.0.0.21       rac2-priv
192.168.20.23   rac2-vip
192.168.20.25   rac-scan
[root@rac1 oracle]# crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DBFS.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.RECO.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.szscdb.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
The planned /etc/hosts file after node 3 joins the cluster:
# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
#ADD
192.168.20.21   rac1
10.0.0.20       rac1-priv
192.168.20.20   rac1-vip
192.168.20.22   rac2
10.0.0.21       rac2-priv
192.168.20.23   rac2-vip
192.168.20.24   rac3
10.0.0.22       rac3-priv
192.168.20.26   rac3-vip
192.168.20.25   rac-scan
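Since all name resolution in this setup goes through /etc/hosts, a quick sanity check that the planned rac3 entries are present and correct on every node can save a failed cluvfy run later. A minimal sketch (the inline `hosts` variable stands in for a node's real /etc/hosts, e.g. `hosts=$(cat /etc/hosts)`; the expected IPs are the ones planned above):

```shell
# Sketch: check that each planned rac3 hostname maps to the expected address.
hosts='192.168.20.24 rac3
10.0.0.22 rac3-priv
192.168.20.26 rac3-vip'

rc=0
for pair in rac3=192.168.20.24 rac3-priv=10.0.0.22 rac3-vip=192.168.20.26; do
  name=${pair%%=*}
  want=${pair#*=}
  # awk: print the first column (IP) of any line whose hostname column matches
  got=$(printf '%s\n' "$hosts" | awk -v n="$name" '$2 == n { print $1 }')
  if [ "$got" = "$want" ]; then
    echo "OK       $name -> $got"
  else
    echo "MISMATCH $name -> ${got:-missing} (expected $want)"
    rc=1
  fi
done
```

Run against each node's real file before starting the pre-checks; any MISMATCH line points at the host to fix.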
1. Install the OS on node 3
The operating system version must match the two existing RAC nodes exactly.
The installation details are omitted here.
2. System configuration
This configuration is identical to the original 11gR2 11.2.0.3 RAC installation, so the details are omitted; refer to the earlier RAC build document.
One thing to watch out for: establishing SSH equivalence among all three nodes is a little fiddly.
On rac1 and rac2, run the following as each of the root, oracle, and grid users:
# su - root
ssh rac3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac3 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac3:~/.ssh/authorized_keys
# su - oracle
ssh rac3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac3 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac3:~/.ssh/authorized_keys
# su - grid
ssh rac3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac3 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac3:~/.ssh/authorized_keys
Then test the connections on every node (confirm on all three nodes, as all three users):
# su -
ssh rac1 date
ssh rac2 date
ssh rac3 date
ssh rac1-priv date
ssh rac2-priv date
ssh rac3-priv date
# su - oracle
ssh rac1 date
ssh rac2 date
ssh rac3 date
ssh rac1-priv date
ssh rac2-priv date
ssh rac3-priv date
# su - grid
ssh rac1 date
ssh rac2 date
ssh rac3 date
ssh rac1-priv date
ssh rac2-priv date
ssh rac3-priv date
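With three nodes and three OS users, the test matrix is easy to get wrong by hand, so it can help to generate it instead. A small sketch (node names as above; it only prints the commands, so nothing is assumed about the host it runs on; `BatchMode=yes` makes a broken equivalence fail immediately instead of hanging at a password prompt):

```shell
# Sketch: emit one equivalence test per (user, hostname) pair.
users="root oracle grid"
hosts="rac1 rac2 rac3 rac1-priv rac2-priv rac3-priv"

count=0
for u in $users; do
  for h in $hosts; do
    echo "su - $u -c 'ssh -o BatchMode=yes $h date'"
    count=$((count + 1))
  done
done
echo "total checks: $count"
```

Pipe the output into a file and run it on each node; any check that does not print a date needs fixing before cluvfy is run.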
Also, do not forget to install the cvuqdisk RPM on the new node:
[root@rac3 ~]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[root@rac3 ~]# rpm -ivh /root/cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
3. Shared storage
The storage is presented to rac3 in the same way as in the original 11gR2 11.2.0.3 RAC installation; the details are omitted here.
4. Checks before extending the cluster
4.1 Check the shared storage
grid@rac1:/home/grid>cluvfy stage -post hwos -n rac3 -verbose
Performing post-checks for hardware and operating system setup
Checking node reachability...
Check: Node reachability from node "rac1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac3                                  yes
Result: Node reachability check passed from node "rac1"
Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac3                                  passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac3                                  passed

Verification of the hosts config file successful

Interface information for node "rac3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address         MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.20.24   192.168.20.0    0.0.0.0         192.168.20.254  00:50:56:94:00:11  1500
 eth1   10.0.0.22       10.0.0.0        0.0.0.0         192.168.20.254  00:50:56:94:00:12  1500
Check: Node connectivity of subnet "192.168.20.0"
Result: Node connectivity passed for subnet "192.168.20.0" with node(s) rac3

Check: TCP connectivity of subnet "192.168.20.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.20.21              rac3:192.168.20.24              passed
Result: TCP connectivity check passed for subnet "192.168.20.0"

Check: Node connectivity of subnet "10.0.0.0"
Result: Node connectivity passed for subnet "10.0.0.0" with node(s) rac3

Check: TCP connectivity of subnet "10.0.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.20.21              rac3:10.0.0.22                  passed
Result: TCP connectivity check passed for subnet "10.0.0.0"

Interfaces found on subnet "192.168.20.0" that are likely candidates for VIP are:
rac3 eth0:192.168.20.24

Interfaces found on subnet "10.0.0.0" that are likely candidates for a private interconnect are:
rac3 eth1:10.0.0.22
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Time zone consistency
Result: Time zone consistency check passed
Checking shared storage accessibility...
  Disk                                  Sharing Nodes (1 in count)
  ------------------------------------  ------------------------
  /dev/sda                              rac3
  /dev/sdb                              rac3
  /dev/sdc                              rac3
  /dev/sdd                              rac3
  /dev/sde                              rac3
  /dev/sdf                              rac3

Shared storage check was successful on nodes "rac3"
Post-check for hardware and operating system setup was successful.
4.2 Check node 3's system configuration
grid@rac1:/home/grid>cluvfy stage -pre nodeadd -n rac3
Performing pre-checks for node addition
Checking node reachability... Node reachability check passed from node "rac1"
Checking user equivalence... User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0" Node connectivity passed for interface "eth0" TCP connectivity check passed for subnet "192.168.20.0"
Checking subnet mask consistency... Subnet mask consistency check passed for subnet "192.168.20.0". Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking shared resources...
Checking CRS home location... "/u01/app/11.2.0/grid" is shared Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0" Node connectivity passed for interface "eth0" TCP connectivity check passed for subnet "192.168.20.0"
Check: Node connectivity for interface "eth1" Node connectivity passed for interface "eth1" TCP connectivity check passed for subnet "10.0.0.0"
Checking subnet mask consistency... Subnet mask consistency check passed for subnet "192.168.20.0". Subnet mask consistency check passed for subnet "10.0.0.0". Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check failed for "rac1:/u01/app/11.2.0/grid"
Check failed on nodes:
        rac1
Free disk space check passed for "rac3:/u01/app/11.2.0/grid"
Free disk space check failed for "rac1:/tmp"
Check failed on nodes:
        rac1
Free disk space check passed for "rac3:/tmp"
Check for multiple users with UID value 500 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started... No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed

Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac1

File "/etc/resolv.conf" is not consistent across nodes
Pre-check for node addition was successful.
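The PRVF-5636 / resolv.conf complaints above are common when the SCAN lives in /etc/hosts instead of DNS. If addNode.sh later refuses to proceed because of such a known-ignorable pre-check failure, the 11.2 installer supports bypassing its built-in cluvfy run through an environment variable. A sketch of the invocation, assembled and printed for review rather than executed (run it from $ORACLE_HOME/oui/bin as the grid user):

```shell
# IGNORE_PREADDNODE_CHECKS=Y tells the 11.2 addNode.sh to skip its internal
# cluvfy pre-check; use it only after reviewing the reported failures by hand.
export IGNORE_PREADDNODE_CHECKS=Y

# Assemble the same silent invocation used below, quoting each OUI parameter.
cmd='./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"'
cmd="$cmd \"CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}\""
cmd="$cmd \"CLUSTER_NEW_PRIVATE_NODE_NAMES={rac3-priv}\""
echo "$cmd"
```

The bypass only silences the automated gate; the underlying issues (here, DNS timeouts on rac1) should still be understood before continuing.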
5. Extend the Grid Infrastructure
5.1 Run addNode.sh
grid@rac1:/home/grid>cd $ORACLE_HOME/oui/bin
grid@rac1:/u01/app/11.2.0/grid/oui/bin>./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac3-priv}" > /tmp/addNode.log
Performing pre-checks for node addition
Checking node reachability... Node reachability check passed from node "rac1"
Checking user equivalence... User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0" Node connectivity passed for interface "eth0" TCP connectivity check passed for subnet "192.168.20.0"
Checking subnet mask consistency... Subnet mask consistency check passed for subnet "192.168.20.0". Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking shared resources...
Checking CRS home location... "/u01/app/11.2.0/grid" is shared Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0" Node connectivity passed for interface "eth0" TCP connectivity check passed for subnet "192.168.20.0"
Check: Node connectivity for interface "eth1" Node connectivity passed for interface "eth1" TCP connectivity check passed for subnet "10.0.0.0"
Checking subnet mask consistency... Subnet mask consistency check passed for subnet "192.168.20.0". Subnet mask consistency check passed for subnet "10.0.0.0". Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check failed for "rac1:/u01/app/11.2.0/grid"
Check failed on nodes:
        rac1
Free disk space check passed for "rac3:/u01/app/11.2.0/grid"
Free disk space check failed for "rac1:/tmp"
Check failed on nodes:
        rac1
Free disk space check passed for "rac3:/tmp"
Check for multiple users with UID value 500 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started... No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed

Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac1

File "/etc/resolv.conf" is not consistent across nodes
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Pre-check for node addition was successful.

Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 2816 MB    Passed

Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes rac2,rac3 are available ............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      rac3
         /u01: Required 4.61GB : Available 8.82GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.4
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Server) 11.2.0.3.0
      Installation Plugin Files 11.2.0.3.0
      Universal Storage Manager Files 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Automatic Storage Management Assistant 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Oracle Multimedia Client Option 11.2.0.3.0
      LDAP Required Support Files 11.2.0.3.0
      Character Set Migration Utility 11.2.0.3.0
      Perl Interpreter 5.10.0.0.2
      PL/SQL Embedded Gateway 11.2.0.3.0
      OLAP SQL Scripts 11.2.0.3.0
      Database SQL Scripts 11.2.0.3.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.3.0
      SQL*Plus Files for Instant Client 11.2.0.3.0
      Oracle Net Required Support Files 11.2.0.3.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.3.0
      RDBMS Required Support Files Runtime 11.2.0.3.0
      XML Parser for Java 11.2.0.3.0
      Oracle Security Developer Tools 11.2.0.3.0
      Oracle Wallet Manager 11.2.0.3.0
      Enterprise Manager plugin Common Files 11.2.0.3.0
      Platform Required Support Files 11.2.0.3.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.3.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.3.0
      Oracle Java Client 11.2.0.3.0
      Cluster Verification Utility Files 11.2.0.3.0
      Oracle Notification Service (eONS) 11.2.0.3.0
      Oracle LDAP administration 11.2.0.3.0
      Cluster Verification Utility Common Files 11.2.0.3.0
      Oracle Clusterware RDBMS Files 11.2.0.3.0
      Oracle Locale Builder 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Buildtools Common Files 11.2.0.3.0
      Oracle RAC Required Support Files-HAS 11.2.0.3.0
      SQL*Plus Required Support Files 11.2.0.3.0
      XDK Required Support Files 11.2.0.3.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.3.0
      Precompiler Required Support Files 11.2.0.3.0
      Installation Common Files 11.2.0.3.0
      Required Support Files 11.2.0.3.0
      Oracle JDBC/THIN Interfaces 11.2.0.3.0
      Oracle Multimedia Locator 11.2.0.3.0
      Oracle Multimedia 11.2.0.3.0
      HAS Common Files 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
      Oracle Recovery Manager 11.2.0.3.0
      Oracle Database Utilities 11.2.0.3.0
      Oracle Notification Service 11.2.0.3.0
      SQL*Plus 11.2.0.3.0
      Oracle Netca Client 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Cluster Ready Services Files 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Monday, August 19, 2013 4:17:53 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Monday, August 19, 2013 4:17:58 PM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Monday, August 19, 2013 4:22:02 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING: A new inventory has been created on one or more nodes in this session.
However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh'
with root privileges on nodes 'rac3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node.
Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes rac3
/u01/app/11.2.0/grid/root.sh #On nodes rac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
5.2 Run the root scripts
[root@rac3 ~]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac3 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
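Once root.sh reports success on rac3, it is worth confirming that the node has actually joined before moving on to the post-check. A sketch of the usual commands, run as the grid user on any node (guarded with `command -v` so the sketch degrades to printing the commands on a machine without Grid Infrastructure):

```shell
# Typical post-join checks:
#   olsnodes -n -s -t         : node list with number, status and pin state
#   crsctl check cluster -all : CSS/CRS/EVM health on every node
for c in "olsnodes -n -s -t" "crsctl check cluster -all"; do
  if command -v "${c%% *}" >/dev/null 2>&1; then
    $c
  else
    echo "would run: $c"
  fi
done | tee /tmp/node_checks.out
```

rac3 should appear in the olsnodes output with status Active before the cluvfy post-check below is attempted.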
5.3 Verify the node addition
grid@rac1:/tmp>cluvfy stage -post nodeadd -n rac1,rac2,rac3
Performing post-checks for node addition
Checking node reachability... Node reachability check passed from node "rac1"
Checking user equivalence... User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0" Node connectivity passed for interface "eth0" TCP connectivity check passed for subnet "192.168.20.0"
Checking subnet mask consistency... Subnet mask consistency check passed for subnet "192.168.20.0". Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking cluster integrity...
Cluster integrity check passed
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking shared resources...
Checking CRS home location... "/u01/app/11.2.0/grid" is not shared Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0" Node connectivity passed for interface "eth0" TCP connectivity check passed for subnet "192.168.20.0"
Check: Node connectivity for interface "eth1" Node connectivity passed for interface "eth1" TCP connectivity check passed for subnet "10.0.0.0"
Checking subnet mask consistency... Subnet mask consistency check passed for subnet "192.168.20.0". Subnet mask consistency check passed for subnet "10.0.0.0". Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.20.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking node application existence...
Checking existence of VIP node application (required) VIP node application check passed
Checking existence of NETWORK node application (required) NETWORK node application check passed
Checking existence of GSD node application (optional) GSD node application is offline on nodes "rac2,rac1,rac3"
Checking existence of ONS node application (optional) ONS node application check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners... TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "rac-scan"...
ERROR: PRVG-1101 : SCAN name "rac-scan" failed to resolve
ERROR: PRVF-4657 : Name resolution setup check for "rac-scan" (IP address: 192.168.20.25) failed
ERROR: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan"
### Note: these errors can be ignored here, since rac-scan is resolved through /etc/hosts rather than DNS.
Verification of SCAN VIP and Listener setup failed
User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes... Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes... CTSS resource check passed
Querying CTSS for time offset on all nodes... Query of CTSS for time offset passed
Check CTSS state started... CTSS is in Active state. Proceeding with check of clock time offsets on all nodes... Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Post-check for node addition was unsuccessful on all the nodes.
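The PRVF-4664 failure above is raised because cluvfy expects the SCAN to come from DNS, while this cluster keeps a single rac-scan entry in /etc/hosts. What actually matters in that setup is that the entry agrees on every node. A sketch of that comparison (the inline variables stand in for each node's /etc/hosts; on a real system you would collect them with `ssh $node grep rac-scan /etc/hosts`):

```shell
# Stand-ins for the rac-scan line from each node's /etc/hosts.
rac1_hosts='192.168.20.25 rac-scan'
rac2_hosts='192.168.20.25 rac-scan'
rac3_hosts='192.168.20.25 rac-scan'

# Take rac1's entry as the reference and compare the others against it.
ref=$(printf '%s\n' "$rac1_hosts" | awk '$2 == "rac-scan" { print $1 }')
consistent=yes
for h in "$rac2_hosts" "$rac3_hosts"; do
  ip=$(printf '%s\n' "$h" | awk '$2 == "rac-scan" { print $1 }')
  [ "$ip" = "$ref" ] || consistent=no
done
echo "rac-scan=$ref consistent=$consistent"
```

If the entries agree, the SCAN-related cluvfy errors can be treated as cosmetic in a hosts-file-based setup.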
6. Extend the Oracle database software
oracle@rac1:/home/oracle>cd $ORACLE_HOME/oui/bin
oracle@rac1:/u02/app/oracle/products/11.2.0/oui/bin>./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" > /home/oracle/addNode.log
Performing pre-checks for node addition
Checking node reachability... Node reachability check passed from node "rac1"
Checking user equivalence... User equivalence check passed for user "oracle"
WARNING: Node "rac3" already appears to be part of cluster
Pre-check for node addition was successful. Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 2853 MB    Passed

Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes rac2,rac3 are available ............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u02/app/oracle/products/11.2.0
   New Nodes
Space Requirements
   New Nodes
      rac3
         /: Required 5.04GB : Available 5.77GB
Installed Products
   Product Names: Oracle Database 11g 11.2.0.3.0 Sun JDK 1.5.0.30.03 Installer SDK Component 11.2.0.3.0 Oracle One-Off Patch Installer 11.2.0.1.7 Oracle Universal Installer 11.2.0.3.0 Oracle USM Deconfiguration 11.2.0.3.0 Oracle Configuration Manager Deconfiguration 10.3.1.0.0 Oracle DBCA Deconfiguration 11.2.0.3.0 Oracle RAC Deconfiguration 11.2.0.3.0 Oracle Database Deconfiguration 11.2.0.3.0 Oracle Configuration Manager Client 10.3.2.1.0 Oracle Configuration Manager 10.3.5.0.1 Oracle ODBC Driver for Instant Client 11.2.0.3.0 LDAP Required Support Files 11.2.0.3.0 SSL Required Support Files for Instant Client 11.2.0.3.0 Bali Share 1.1.18.0.0 Oracle Extended Windowing Toolkit 3.4.47.0.0 Oracle JFC Extended Windowing Toolkit 4.2.36.0.0 Oracle Real Application Testing 11.2.0.3.0 Oracle Database Vault J2EE Application 11.2.0.3.0 Oracle Label Security 11.2.0.3.0 Oracle Data Mining RDBMS Files 11.2.0.3.0 Oracle OLAP RDBMS Files 11.2.0.3.0 Oracle OLAP API 11.2.0.3.0 Platform Required Support Files 11.2.0.3.0 Oracle Database Vault option 11.2.0.3.0 Oracle RAC Required Support Files-HAS 11.2.0.3.0 SQL*Plus Required Support Files 11.2.0.3.0 Oracle Display Fonts 9.0.2.0.0 Oracle Ice Browser 5.2.3.6.0 Oracle JDBC Server Support Package 11.2.0.3.0 Oracle SQL Developer 11.2.0.3.0 Oracle Application Express 11.2.0.3.0 XDK Required Support Files 11.2.0.3.0 RDBMS Required Support Files for Instant Client 11.2.0.3.0 SQLJ Runtime 11.2.0.3.0 Database Workspace Manager 11.2.0.3.0 RDBMS Required Support Files Runtime 11.2.0.3.0 Oracle Globalization Support 11.2.0.3.0 Exadata Storage Server 11.2.0.1.0 Provisioning Advisor Framework 10.2.0.4.3 Enterprise Manager Database Plugin -- Repository Support 11.2.0.3.0 Enterprise Manager Repository Core Files 10.2.0.4.4 Enterprise Manager Database Plugin -- Agent Support 11.2.0.3.0 Enterprise Manager Grid Control Core Files 10.2.0.4.4 Enterprise Manager Common Core Files 10.2.0.4.4 Enterprise Manager Agent Core Files 10.2.0.4.4 RDBMS Required Support Files 11.2.0.3.0 regexp 2.1.9.0.0 Agent Required Support Files 10.2.0.4.3 Oracle 11g Warehouse Builder Required Files 11.2.0.3.0 Oracle Notification Service (eONS) 11.2.0.3.0 Oracle Text Required Support Files 11.2.0.3.0 Parser Generator Required Support Files 11.2.0.3.0 Oracle Database 11g Multimedia Files 11.2.0.3.0 Oracle Multimedia Java Advanced Imaging 11.2.0.3.0 Oracle Multimedia Annotator 11.2.0.3.0 Oracle JDBC/OCI Instant Client 11.2.0.3.0 Oracle Multimedia Locator RDBMS Files 11.2.0.3.0 Precompiler Required Support Files 11.2.0.3.0 Oracle Core Required Support Files 11.2.0.3.0 Sample Schema Data 11.2.0.3.0 Oracle Starter Database 11.2.0.3.0 Oracle Message Gateway Common Files 11.2.0.3.0 Oracle XML Query 11.2.0.3.0 XML Parser for Oracle JVM 11.2.0.3.0 Oracle Help For Java 4.2.9.0.0 Installation Plugin Files 11.2.0.3.0 Enterprise Manager Common Files 10.2.0.4.3 Expat libraries 2.0.1.0.1 Deinstallation Tool 11.2.0.3.0 Oracle Quality of Service Management (Client) 11.2.0.3.0 Perl Modules 5.10.0.0.1 JAccelerator (COMPANION) 11.2.0.3.0 Oracle Containers for Java 11.2.0.3.0 Perl Interpreter 5.10.0.0.2 Oracle Net Required Support Files 11.2.0.3.0 Secure Socket Layer 11.2.0.3.0 Oracle Universal Connection Pool 11.2.0.3.0 Oracle JDBC/THIN Interfaces 11.2.0.3.0 Oracle Multimedia Client Option 11.2.0.3.0 Oracle Java Client 11.2.0.3.0 Character Set Migration Utility 11.2.0.3.0 Oracle Code Editor 1.2.1.0.0 PL/SQL Embedded Gateway 11.2.0.3.0 OLAP SQL Scripts 11.2.0.3.0 Database SQL Scripts 11.2.0.3.0 Oracle Locale Builder 11.2.0.3.0 Oracle Globalization Support 11.2.0.3.0 SQL*Plus Files for Instant Client 11.2.0.3.0 Required Support Files 11.2.0.3.0 Oracle Database User Interface 2.2.13.0.0 Oracle ODBC Driver 11.2.0.3.0 Oracle Notification Service 11.2.0.3.0 XML Parser for Java 11.2.0.3.0 Oracle Security Developer Tools 11.2.0.3.0 Oracle Wallet Manager 11.2.0.3.0 Cluster Verification Utility Common Files 11.2.0.3.0 Oracle Clusterware RDBMS Files 11.2.0.3.0 Oracle UIX 2.2.24.6.0 Enterprise Manager plugin Common Files 11.2.0.3.0 HAS Common Files 11.2.0.3.0 Precompiler Common Files 11.2.0.3.0 Installation Common Files 11.2.0.3.0 Oracle Help for the Web 2.0.14.0.0 Oracle LDAP administration 11.2.0.3.0 Buildtools Common Files 11.2.0.3.0 Assistant Common Files 11.2.0.3.0 Oracle Recovery Manager 11.2.0.3.0 PL/SQL 11.2.0.3.0 Generic Connectivity Common Files 11.2.0.3.0 Oracle Database Gateway for ODBC 11.2.0.3.0 Oracle Programmer 11.2.0.3.0 Oracle Database Utilities 11.2.0.3.0 Enterprise Manager Agent 10.2.0.4.3 SQL*Plus 11.2.0.3.0 Oracle Netca Client 11.2.0.3.0 Oracle Multimedia Locator 11.2.0.3.0 Oracle Call Interface (OCI) 11.2.0.3.0 Oracle Multimedia 11.2.0.3.0 Oracle Net 11.2.0.3.0 Oracle XML Development Kit 11.2.0.3.0 Database Configuration and Upgrade Assistants 11.2.0.3.0 Oracle JVM 11.2.0.3.0 Oracle Advanced Security 11.2.0.3.0 Oracle Internet Directory Client 11.2.0.3.0 Oracle Enterprise Manager Console DB 11.2.0.3.0 HAS Files for DB 11.2.0.3.0 Oracle Net Listener 11.2.0.3.0 Oracle Text 11.2.0.3.0 Oracle Net Services 11.2.0.3.0 Oracle Database 11g 11.2.0.3.0 Oracle OLAP 11.2.0.3.0 Oracle Spatial 11.2.0.3.0 Oracle Partitioning 11.2.0.3.0 Enterprise Edition Options 11.2.0.3.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Tuesday, August 20, 2013 9:35:07 AM CST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Tuesday, August 20, 2013 9:35:12 AM CST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Tuesday, August 20, 2013 9:44:31 AM CST)
. 100% Done.
Save inventory complete
WARNING: The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u02/app/oracle/products/11.2.0/root.sh #On nodes rac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
The Cluster Node Addition of /u02/app/oracle/products/11.2.0 was successful.
Please check '/tmp/silentInstall.log' for more details.
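As the installer warning above notes, root.sh still has to be executed on the new node before continuing. A minimal way to do that (the path is taken from the installer output; run it as root on rac3):

```shell
# As root on the new node rac3 -- path from the addNode.sh output above
/u02/app/oracle/products/11.2.0/root.sh
```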
7. Create the instance on the new node
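The post does not show commands for this step. One common manual alternative to DBCA is sketched below; the redo group numbers, file sizes, and the szscdb3/rac3 names are assumptions derived from the status output later in this post, so verify them against your own environment before running anything.

```shell
# Run on an existing node (e.g. rac1) as the oracle user.
# 1. Add redo thread 3 and an undo tablespace for the new instance
#    (group numbers and sizes here are illustrative assumptions).
sqlplus -S / as sysdba <<'EOF'
ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 5 SIZE 50M, GROUP 6 SIZE 50M;
ALTER DATABASE ENABLE PUBLIC THREAD 3;
CREATE UNDO TABLESPACE undotbs3 DATAFILE SIZE 500M AUTOEXTEND ON;
ALTER SYSTEM SET instance_number=3 SCOPE=SPFILE SID='szscdb3';
ALTER SYSTEM SET thread=3 SCOPE=SPFILE SID='szscdb3';
ALTER SYSTEM SET undo_tablespace='UNDOTBS3' SCOPE=SPFILE SID='szscdb3';
EOF

# 2. Register the new instance with Clusterware and start it.
srvctl add instance -d szscdb -i szscdb3 -n rac3
srvctl start instance -d szscdb -i szscdb3
```

Once the instance is up, GV$INSTANCE should report three instances, which is exactly what the status check in step 8 verifies.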
8. Status check
[root@rac3 ~]# crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.DBFS.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.RECO.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
               ONLINE  ONLINE       rac3                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
               OFFLINE OFFLINE      rac3
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac3
ora.cvu
      1        ONLINE  ONLINE       rac3
ora.oc4j
      1        OFFLINE OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.rac3.vip
      1        ONLINE  ONLINE       rac3
ora.scan1.vip
      1        ONLINE  ONLINE       rac3
ora.szscdb.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
      3        ONLINE  ONLINE       rac3                     Open
SQL> select INSTANCE_NUMBER,INSTANCE_NAME,STARTUP_TIME FROM GV$INSTANCE;
INSTANCE_NUMBER INSTANCE_NAME    STARTUP_T
--------------- ---------------- ---------
              3 szscdb3          20-AUG-13
              2 szscdb2          20-AUG-13
              1 szscdb1          20-AUG-13
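Beyond crsctl and GV$INSTANCE, the Cluster Verification Utility can confirm the node addition end to end. Run it from an existing node as the Grid Infrastructure owner:

```shell
# Post-node-addition check with CVU (cluvfy ships in the Grid home)
cluvfy stage -post nodeadd -n rac3 -verbose
```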
By Guanghui Zhou
Tue Aug 20 2013, in Suzhou
From the ITPUB blog: http://blog.itpub.net/26169542/viewspace-768871/. Please credit the source when reposting; unauthorized reproduction may incur legal liability.