Recently I wanted to add a node to my RAC cluster. The official documentation makes it look simple; here is the link:
http://docs.oracle.com/cd/E11882_01/rac.112/e41959/adddelclusterware.htm#CWADD90989
First, the hardware environment. The setup used for this experiment is as follows:
Server environment:

| No. | Name | Version |
|-----|------|---------|
| 1 | Oracle Linux | Enterprise-R6-U8-Server-x86_64 |
| 2 | Grid Infrastructure | 112040_Linux-x86-64 |
| 3 | Oracle 11g | 112040_Linux-x86-64 |
Below is my network configuration. No DNS server is used in this exercise; everything is hard-coded in /etc/hosts, so cluster01-scan has only a single address. In a real deployment you should configure three SCAN addresses.
Network configuration:

| Host | Interface | Address type | IP address | Remark | Domain |
|------|-----------|--------------|------------|--------|--------|
| host01 | eth0 | public | 10.0.1.101 | | example.com |
| | eth1 | private | 192.168.56.101 | host01-priv | none |
| | vip | virtual | 10.0.1.105 | host01-vip | |
| host02 | eth0 | public | 10.0.1.102 | | example.com |
| | eth1 | private | 192.168.56.102 | host02-priv | none |
| | vip | virtual | 10.0.1.106 | host02-vip | |
| host03 | eth0 | public | 10.0.1.103 | new node | example.com |
| | eth1 | private | 192.168.56.103 | host03-priv | |
| | vip | virtual | 10.0.1.107 | host03-vip | |
| cluster01-scan | virtual address | public | 10.0.1.201 | SCAN address | |
Below is the /etc/hosts file used on each host. Make sure the new node's /etc/hosts is identical to the one on the other two nodes. Throughout this procedure, ORACLE_HOME=/u01/app/11.2.0/grid.
[grid@host01 bin]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.1.101 host01.example.com host01
10.0.1.102 host02.example.com host02
10.0.1.103 host03.example.com host03
192.168.56.101 host01-priv
192.168.56.102 host02-priv
192.168.56.103 host03-priv
10.0.1.105 host01-vip
10.0.1.106 host02-vip
10.0.1.107 host03-vip
10.0.1.201 cluster01-scan.example.com cluster01-scan
#10.0.1.202 cluster01-scan.example.com cluster01-scan
#10.0.1.203 cluster01-scan.example.com cluster01-scan
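Since the hosts files must be kept in sync by hand, a quick way to compare copies pulled from different nodes is to normalize them (drop comments and blank lines, sort) before diffing. This is an illustrative sketch; `normalize_hosts` is my own helper, not an Oracle tool:

```shell
# Normalize a hosts file so copies from different nodes diff cleanly:
# strip comment lines, drop blank lines, sort the rest.
normalize_hosts() {
    grep -v '^#' "$1" | awk 'NF' | sort
}

# Example usage after copying each node's file locally:
#   normalize_hosts hosts.host01 > /tmp/h01
#   normalize_hosts hosts.host03 > /tmp/h03
#   diff /tmp/h01 /tmp/h03 && echo "hosts files match"
```

This ignores cosmetic differences (comment lines, ordering) and flags only real entry mismatches.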
That covers the environment preparation. I will skip the build-out of the third server, host03, here, but one thing to note is that the firewall must be disabled:
[root@host03 ~]# service iptables stop
[root@host03 ~]# chkconfig iptables off
If the firewall is left on, the installation may fail with error PRVF-7617.
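Before running the pre-checks, it is worth confirming the firewall really is disabled at every runlevel, not just stopped for the current session. A minimal sketch that scans `chkconfig --list iptables` output for any runlevel still set to `on` (the `iptables_disabled` helper is hypothetical, RHEL/OL 6 style):

```shell
# Return success only if no runlevel in the given chkconfig line is "on".
# Expects input like: iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
iptables_disabled() {
    ! echo "$1" | grep -q ':on'
}

# On a real node you would run:
#   iptables_disabled "$(chkconfig --list iptables)" && echo "firewall disabled"
```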
1. On host01, run the node-addition pre-check (with fixup script generation):
[grid@host01 bin]$ cluvfy stage -pre nodeadd -n host03 -fixup
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "host01"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
"/u01/app/11.2.0/grid" is shared
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "10.0.1.0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.56.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.0.1.0".
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "10.0.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "host03:/u01/app/11.2.0/grid,host03:/tmp"
Free disk space check passed for "host01:/u01/app/11.2.0/grid,host01:/tmp"
Check for multiple users with UID value 501 passed