ORACLE 10G RAC: adding a new node (rac3)

1. Hostname planning:
#Public Network - (eth0)
192.168.10.1   rac1
192.168.10.2   rac2
192.168.10.5   rac3       # public IP of the new node
#Private Interconnect - (eth1)
10.0.0.1     rac1-priv
10.0.0.2     rac2-priv
10.0.0.3     rac3-priv      # private IP of the new node
#Public Virtual IP (VIP) addresses - (eth0)
192.168.10.3   rac1-vip
192.168.10.4   rac2-vip
192.168.10.6   rac3-vip  # virtual IP of the new node
2. Configure the same OS environment on rac3 as on rac1 and rac2, including creating the users, setting environment variables, SSH equivalence, and so on.
To initialize the third node, it must first be configured so that it qualifies as a member of the RAC environment; this exercise is done in a virtualized environment:
1) Verify the basic environment
Check the users and groups, as follows:

[root@rac3 ~]# id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba)
[root@rac3 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)
Check the oracle user's environment variables, as follows:
[root@rac3 ~]# su - oracle
[oracle@rac3 ~]$ vi .bash_profile
export ORACLE_SID=orcl3     # the entry to change
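For reference, a minimal sketch of the oracle user's .bash_profile on rac3; the paths here are assumptions based on the directory layout shown later in this article and must match rac1/rac2 exactly — only ORACLE_SID differs per node:

```
# Sketch only -- paths are assumed from this article's layout; verify against
# the existing nodes. ORACLE_SID is the one value unique to each node.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/db_1
export ORA_CRS_HOME=$ORACLE_BASE/crs
export ORACLE_SID=orcl3
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
```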
Verify the directories and their permissions, as follows:
[root@rac3 oracle]# chown -R oracle:oinstall crs/
[root@rac3 oracle]# ll
total 4
drwxr-xr-x 2 oracle oinstall 4096 May 17 18:28 crs
Check the kernel parameters, as follows:
[root@rac3 oracle]# vi /etc/sysctl.conf
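For reference, the kernel parameters Oracle documents as minimums for 10gR2 on Linux look like the fragment below; this is a sketch — the values on rac3 must match whatever rac1 and rac2 already use:

```
# /etc/sysctl.conf -- documented 10gR2 minimums (sketch; match existing nodes)
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
```

Run sysctl -p afterwards so the values take effect without a reboot.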
Check the oracle user's shell limits (set in /etc/security/limits.conf and enabled through pam_limits), as follows:
[root@rac3 oracle]# vi /etc/security/limits.conf
[root@rac3 oracle]# vi /etc/pam.d/login
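The limits themselves live in /etc/security/limits.conf, while the pam_limits entry in /etc/pam.d/login makes them apply at login; a sketch with the minimums usually documented for 10gR2 — confirm against the existing nodes:

```
# /etc/security/limits.conf (sketch; confirm against rac1/rac2)
oracle  soft  nproc   2047
oracle  hard  nproc   16384
oracle  soft  nofile  1024
oracle  hard  nofile  65536

# /etc/pam.d/login -- enables the limits above
session  required  pam_limits.so
```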
Verify the hangcheck timer, as follows:
[root@rac3 oracle]# vi /etc/rc.local
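The hangcheck-timer module is typically loaded from /etc/rc.local with the tick and margin values commonly used for 10g; keep the parameters identical on every node (a sketch):

```
# /etc/rc.local (sketch; use the same parameters as rac1/rac2)
/sbin/modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
```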
Verify the partitions and the bound raw devices, as follows:
[root@rac3 oracle]# ll /dev/raw/raw*
crw-rw---- 1 oracle dba 162, 1 May 17 18:03 /dev/raw/raw1
crw-rw---- 1 oracle dba 162, 2 May 17 18:03 /dev/raw/raw2
crw-rw---- 1 oracle dba 162, 3 May 17 18:03 /dev/raw/raw3
crw-rw---- 1 oracle dba 162, 4 May 17 18:03 /dev/raw/raw4
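On RHEL-style systems the bindings above are usually made persistent in /etc/sysconfig/rawdevices; the block-device names below are hypothetical placeholders — rac3 must bind exactly the same shared disks that rac1 and rac2 use:

```
# /etc/sysconfig/rawdevices (sketch; the sdb/sdc names are placeholders)
/dev/raw/raw1  /dev/sdb1    # OCR
/dev/raw/raw2  /dev/sdb2    # voting disk
/dev/raw/raw3  /dev/sdc1    # ASM disk
/dev/raw/raw4  /dev/sdc2    # ASM disk
```

After editing, service rawdevices restart re-applies the bindings.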

2) Configure /etc/hosts as follows:
[root@rac3 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
#127.0.0.1              rac1 localhost.localdomain localhost
127.0.0.1       localhost
#::1            localhost6.localdomain6 localhost6
#public ip
192.168.10.1    rac1
192.168.10.2    rac2
192.168.10.5    rac3
#private ip
10.0.0.1        rac1-priv
10.0.0.2        rac2-priv
10.0.0.3        rac3-priv
#virtual ip
192.168.10.3    rac1-vip
192.168.10.4    rac2-vip
192.168.10.6    rac3-vip

Note: the hosts file must be updated not only on the new node but on every node in the RAC environment.
3) Configure SSH key authentication

Nodes in a RAC environment not only communicate constantly but may also access one another's files, so inter-node access must complete without manual password entry; we achieve this by configuring SSH.

First, operate on the newly added node, rac3 (note which user runs the commands):
[oracle@rac3 .ssh]$ ls
authorized_keys  id_dsa  id_dsa.pub  id_rsa  id_rsa.pub  known_hosts
[oracle@rac3 .ssh]$ rm -rf *
[oracle@rac3 .ssh]$ ll
total 0

[oracle@rac3 ~]$ ssh-keygen -t rsa

[oracle@rac3 ~]$ ssh-keygen -t dsa

Then switch to node rac1 and run the following, also as the oracle user (you may be prompted for the remote node's password when it is accessed):

[oracle@rac1 ~]$ ssh rac3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac3 (192.168.10.5)' can't be established.
RSA key fingerprint is 7c:f4:aa:d2:89:fc:0d:1f:ff:33:07:15:21:97:62:8f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac3,192.168.10.5' (RSA) to the list of known hosts.
oracle@rac3's password:
[oracle@rac1 ~]$ ssh rac3 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@rac3's password:

Finally, copy the assembled authentication key file from rac1 to node rac2 and node rac3, as follows:

[oracle@rac1 ~]$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
authorized_keys                               100% 2982     2.9KB/s   00:00
[oracle@rac1 ~]$ scp ~/.ssh/authorized_keys rac3:~/.ssh/authorized_keys
oracle@rac3's password:
authorized_keys                               100% 2982     2.9KB/s   00:00

Verify inter-node trust             # each command must return a date without prompting for a password
[oracle@rac1 .ssh]$ ssh rac1 date
Sat May 17 16:25:22 CST 2014
[oracle@rac1 .ssh]$ ssh rac2 date
Sat May 17 16:25:30 CST 2014
[oracle@rac1 .ssh]$ ssh rac3 date
Sat May 17 16:25:36 CST 2014
[oracle@rac1 .ssh]$ ssh rac1-priv date
Sat May 17 16:26:11 CST 2014
[oracle@rac1 .ssh]$ ssh rac2-priv date
Sat May 17 16:26:20 CST 2014
Run these checks on all three nodes (include the rac3-priv address as well);
[oracle@rac1 .ssh]$ exec /usr/bin/ssh-agent $SHELL
[oracle@rac1 .ssh]$ /usr/bin/ssh-add                   # check that the identities are listed
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
3. Add Clusterware to the new node
1) Check the installation environment

First check the installation environment, again using the runcluvfy.sh script; it can be run from any node of the existing RAC, here from node rac1, as follows:
[oracle@rac1 cluvfy]$ pwd
/u01/app/clusterware/cluvfy
[oracle@rac1 cluvfy]$ ls
cvupack.zip  jrepack.zip  runcluvfy.sh

[oracle@rac1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n rac3 -verbose
The end of the output should show that the checks passed.

2) Install Clusterware on the new node

The Clusterware installation for the new node is also started from the existing RAC environment: from $CRS_HOME on any current node, run the oui/bin/addNode.sh script to launch the graphical installer, as follows:

On rac1, run:
[root@rac1 ~]# xhost +
[root@rac1 ~]# su - oracle
[oracle@rac1 bin]$ pwd
/u01/app/oracle/crs/oui/bin
[oracle@rac1 bin]$ ls
addLangs.sh  attachHome.sh  lsnodes    ouica.sh  runConfig.sh  runInstaller.sh
addNode.sh   detachHome.sh  ouica.bat  resource  runInstaller

[oracle@rac1 bin]$ ./addNode.sh
The following welcome screen appears:

The node information to be added is as follows:

Follow the prompts and run the corresponding scripts in order, as follows:


[root@rac1 ~]# /u01/app/oracle/crs/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 3: rac3 rac3-priv rac3
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/u01/app/oracle/crs/bin/srvctl add nodeapps -n rac3 -A rac3-vip/255.255.255.0/eth0 -o /u01/app/oracle/crs

[root@rac3 app]# /u01/app/oracle/crs/root.sh
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
OCR LOCATIONS =  /dev/raw/raw1
OCR backup directory '/u01/app/oracle/crs/cdata/crs' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
        rac3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps


Creating VIP application resource on (0) nodes.
Creating GSD application resource on (0) nodes.
Creating ONS application resource on (0) nodes.
Starting VIP application resource on (2) nodes1:CRS-0233: Resource or relatives are currently involved with another operation.
Check the log file "/u01/app/oracle/crs/log/rac1/racg/ora.rac1.vip.log" for more details
.1:CRS-0233: Resource or relatives are currently involved with another operation.
Check the log file "/u01/app/oracle/crs/log/rac2/racg/ora.rac2.vip.log" for more details
..
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...

Done.
After the scripts complete, click Next; the following installation-complete screen appears:

Check the CRS status:
[oracle@rac2 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.orcl.db    application    0/0    0/1    ONLINE    ONLINE    rac3
ora....l1.inst application    0/5    0/0    ONLINE    ONLINE    rac1
ora....l2.inst application    0/5    0/0    ONLINE    ONLINE    rac2
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
ora.rac3.gsd   application    0/5    0/0    ONLINE    ONLINE    rac3
ora.rac3.ons   application    0/3    0/0    ONLINE    ONLINE    rac3
ora.rac3.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
3) Next, the new node's ONS (Oracle Notification Services) configuration must be written into the OCR (Oracle Cluster Registry). First check the port used on the third node, rac3, as follows:
[oracle@rac3 conf]$ pwd
/u01/app/oracle/crs/opmn/conf
[oracle@rac3 conf]$ cat ons.config
localport=6113
remoteport=6200
loglevel=3
useocr=on
Run the following on node rac1:
[oracle@rac1 bin]$ pwd
/u01/app/oracle/crs/bin
[oracle@rac1 bin]$ ./racgons add_config rac3:6200

This completes the Clusterware configuration for the new node. To check the result, run the cluvfy command on the new node, as follows:
[oracle@rac3 bin]$ pwd
/u01/app/oracle/crs/bin
[oracle@rac3 bin]$ ./cluvfy stage -post crsinst -n rac3 -verbose

Performing post-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "rac3"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac3                                  yes
Result: Node reachability check passed from node "rac3".

Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  rac3                                  passed
Result: User equivalence check passed for user "oracle".

Checking Cluster manager integrity...

Checking CSS daemon...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac3                                  running
Result: Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...

  Node Name
  ------------------------------------
  rac1
  rac2
  rac3


Cluster integrity check passed

Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...

ERROR:
OCR integrity is invalid.

OCR integrity check failed.

Checking CRS integrity...

Checking daemon liveness...

Check: Liveness for "CRS daemon"
  Node Name                             Running
  ------------------------------------  ------------------------
  rac3                                  yes
Result: Liveness check passed for "CRS daemon".

Checking daemon liveness...

Check: Liveness for "CSS daemon"
  Node Name                             Running
  ------------------------------------  ------------------------
  rac3                                  yes
Result: Liveness check passed for "CSS daemon".

Checking daemon liveness...

Check: Liveness for "EVM daemon"
  Node Name                             Running
  ------------------------------------  ------------------------
  rac3                                  yes
Result: Liveness check passed for "EVM daemon".

Liveness of all the daemons
  Node Name     CRS daemon                CSS daemon                EVM daemon
  ------------  ------------------------  ------------------------  ----------
  rac3          yes                       yes                       yes

Checking CRS health...

Check: Health of CRS
  Node Name                             CRS OK?
  ------------------------------------  ------------------------
  rac3                                  yes
Result: CRS health check passed.

CRS integrity check passed.

Checking node application existence...

Checking existence of VIP node application
  Node Name     Required                  Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  rac3          yes                       exists                    passed
Result: Check passed.


Checking existence of ONS node application
  Node Name     Required                  Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  rac3          no                        exists                    passed
Result: Check passed.

Checking existence of GSD node application
  Node Name     Required                  Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  rac3          no                        exists                    passed
Result: Check passed.

Post-check for cluster services setup was unsuccessful on all the nodes.    # Reaching this point still means the node was added successfully: only the OCR integrity check failed, which is commonly a false report from 10.2 cluvfy when the OCR resides on raw devices; all the CRS, CSS, and EVM checks above passed.
4. Copy the Oracle software to the new node rac3

Next the Oracle database software must be copied to the new node. The copy can be started from any node of the existing RAC environment; here we choose to operate on node rac2:
[oracle@rac2 bin]$ pwd
/u01/app/oracle/db_1/oui/bin
[oracle@rac2 bin]$ ./addNode.sh

The following welcome screen appears:

Add the third node, as follows:

The installer then copies the Oracle software installed on the second node to the newly added node:

When the following prompt appears, run the script as instructed:


[root@rac3 ~]# /u01/app/oracle/db_1/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
5. Configure the listener on the new node rac3:

Run netca on rac2, select rac3, and add a listener.
Choose to create the listener only on node rac3, as follows:

Check the listener status, as follows:
[oracle@rac3 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.orcl.db    application    0/0    0/1    ONLINE    ONLINE    rac3
ora....l1.inst application    0/5    0/0    ONLINE    ONLINE    rac1
ora....l2.inst application    0/5    0/0    ONLINE    ONLINE    rac2
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
ora....C3.lsnr application    0/5    0/0    ONLINE    ONLINE    rac3
ora.rac3.gsd   application    0/5    0/0    ONLINE    ONLINE    rac3
ora.rac3.ons   application    0/3    0/0    ONLINE    ONLINE    rac3
ora.rac3.vip   application    0/0    0/0    ONLINE    ONLINE    rac3
6. Add a database instance on the new node rac3, as follows:
[oracle@rac1 ~]$ dbca
Select Oracle Real Application Clusters database, as follows:


Choose Instance Management, as follows:

Choose to add an instance, as follows:

Add the instance to the database that is currently running, as follows:

The two existing instances of the current database are displayed, as follows:

Accept the default instance and click Next, as follows:

The Instance Storage information is displayed, as follows:

During this step Oracle automatically creates the instance on the new node, and will prompt to create the related ASM instance if ASM is used for storage (click Yes).


7. Verify the result:
[oracle@rac1 ~]$ export ORACLE_SID=orcl1
[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Sat May 17 22:06:55 2014

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> select  INST_ID, INSTANCE_NAME,STATUS from gv$instance;

   INST_ID INSTANCE_NAME    STATUS
---------- ---------------- ------------
         1 orcl1            OPEN
         3 orcl3            OPEN
         2 orcl2            OPEN
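With orcl3 open, client connectivity usually also needs a tnsnames.ora entry for the new instance on each node; a sketch, with the service and instance names taken from this article:

```
# $ORACLE_HOME/network/admin/tnsnames.ora (sketch; names from this article)
ORCL3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
      (INSTANCE_NAME = orcl3)
    )
  )
```

The existing ORCL alias's address list can likewise be extended with rac3-vip for load balancing and failover.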

Source: ITPUB blog, http://blog.itpub.net/29634949/viewspace-1163338/ — please credit the source when reposting.
