Installing a Three-Node Oracle 10.2 RAC on Solaris Raw Devices (Part 2)

Installing a three-node Oracle 10.2 RAC on Solaris using raw devices.

This part covers the installation of Oracle Clusterware.

Part 1: http://yangtingkun.itpub.net/post/468/512772

 

 

The previous article finished preparing the operating system. This one walks through installing Oracle Clusterware for the RAC environment.

Unpack the Oracle Clusterware installation archive with cpio -idmv < 10gr2_cluster_sol.cpio. Then change into the extracted directory, enter the cluvfy subdirectory, and run the following pre-installation check:

$ cd cluster_disk
$ cd cluvfy

bash-2.03$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2,racnode3

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "racnode1".


Checking user equivalence...
User equivalence check passed for user "oracle".

Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] passed.

Administrative privileges check passed.

Checking node connectivity...

Node connectivity check passed for subnet "172.25.0.0" with node(s) racnode3,racnode2,racnode1.
Node connectivity check passed for subnet "10.0.0.0" with node(s) racnode3,racnode2,racnode1.

Suitable interfaces for the private interconnect on subnet "172.25.0.0":
racnode3 ce0:172.25.198.226
racnode2 ce0:172.25.198.223
racnode1 ce0:172.25.198.222

ERROR:
Could not find a suitable set of interfaces for VIPs.

Node connectivity check failed.


Checking system requirements for 'crs'...
Total memory check passed.
Free disk space check passed.
Swap space check failed.
Check failed on nodes:
        racnode3,racnode2,racnode1
System architecture check passed.
Operating system version check failed.
Check failed on nodes:
        racnode3
Package existence check passed for "SUNWarc".
Package existence check passed for "SUNWbtool".
Package existence check passed for "SUNWhea".
Package existence check passed for "SUNWlibm".
Package existence check passed for "SUNWlibms".
Package existence check passed for "SUNWsprot".
Package existence check passed for "SUNWsprox".
Package existence check passed for "SUNWtoo".
Package existence check passed for "SUNWi1of".
Package existence check passed for "SUNWi1cs".
Package existence check passed for "SUNWi15cs".
Package existence check passed for "SUNWxwfnt".
Package existence check passed for "SUNWlibC".
Package existence check failed for "SUNWscucm:3.1".
Check failed on nodes:
        racnode3,racnode2,racnode1
Package existence check failed for "SUNWudlmr:3.1".
Check failed on nodes:
        racnode3,racnode2,racnode1
Package existence check failed for "SUNWudlm:3.1".
Check failed on nodes:
        racnode3,racnode2,racnode1
Package existence check failed for "ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant".
Check failed on nodes:
        racnode3,racnode2,racnode1
Package existence check failed for "SUNWscr:3.1".
Check failed on nodes:
        racnode3,racnode2,racnode1
Package existence check failed for "SUNWscu:3.1".
Check failed on nodes:
        racnode3,racnode2,racnode1
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "oracle".
User existence check passed for "nobody".

System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes.

The cause of the VIP error was covered in detail in an earlier article and is not repeated here. The swap-space failure can be ignored: as verified in the previous article, all systems have sufficient swap. The failed package checks all concern Sun Cluster packages; since this installation uses Oracle Clusterware rather than Sun Cluster, those errors can be ignored as well. Finally, the operating system version check fails because racnode3 runs a different release from racnode1 and racnode2; this can also be ignored.
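Both of those findings are easy to re-verify by hand on each node. A minimal check using standard Solaris utilities (swap -s summarizes allocated and available swap, swap -l lists the swap devices, and uname -r shows the OS release to compare across the three nodes):

# swap -s
# swap -l
# uname -r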

Before installation, set up the shared storage that Clusterware needs. The shared space allocated on the storage array is divided into multiple raw devices, which are visible under /dev/rdsk.

Because the three test servers use different Fibre Channel HBAs, the raw device names differ between them. On racnode1 the devices are:

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0
          /pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cfd99114,0
       1. c2t3d0
          /pci@8,700000/QLGC,qla@3/sd@3,0
       2. c2t3d1
          /pci@8,700000/QLGC,qla@3/sd@3,1
       3. c2t3d2
          /pci@8,700000/QLGC,qla@3/sd@3,2
       4. c2t3d3
          /pci@8,700000/QLGC,qla@3/sd@3,3
       5. c2t3d4
          /pci@8,700000/QLGC,qla@3/sd@3,4
       6. c2t3d5
          /pci@8,700000/QLGC,qla@3/sd@3,5
       7. c2t3d6
          /pci@8,700000/QLGC,qla@3/sd@3,6
       8. c2t3d7
          /pci@8,700000/QLGC,qla@3/sd@3,7
       9. c2t3d8
          /pci@8,700000/QLGC,qla@3/sd@3,8
      10. c2t3d9
          /pci@8,700000/QLGC,qla@3/sd@3,9
      11. c2t3d10
          /pci@8,700000/QLGC,qla@3/sd@3,a
      12. c2t3d11
          /pci@8,700000/QLGC,qla@3/sd@3,b
      13. c2t3d12
          /pci@8,700000/QLGC,qla@3/sd@3,c
      14. c2t3d13
          /pci@8,700000/QLGC,qla@3/sd@3,d

On racnode2 and racnode3 the device names are similar:

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0
          /pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cfd9e4b8,0
       1. c1t1d0
          /pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cfd9ead5,0
       2. c2t500601603022E66Ad0
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,0
       3. c2t500601603022E66Ad1
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,1
       4. c2t500601603022E66Ad2
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,2
       5. c2t500601603022E66Ad3
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,3
       6. c2t500601603022E66Ad4
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,4
       7. c2t500601603022E66Ad5
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,5
       8. c2t500601603022E66Ad6
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,6
       9. c2t500601603022E66Ad7
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,7

During installation, the shared disks for the OCR and the voting disk must have the same path name on every node, so create the following symbolic links. On racnode1:

# mkdir /dev/rac
# ln -s -f /dev/rdsk/c2t3d2s1 /dev/rac/ocr
# ln -s -f /dev/rdsk/c2t3d2s3 /dev/rac/vot
# chown oracle:oinstall /dev/rdsk/c2t3d2s1
# chown oracle:oinstall /dev/rdsk/c2t3d2s3

On racnode2:

# mkdir /dev/rac
# ln -s -f /dev/rdsk/c2t500601603022E66Ad2s1 /dev/rac/ocr
# ln -s -f /dev/rdsk/c2t500601603022E66Ad2s3 /dev/rac/vot
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad2s1
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad2s3

And on racnode3:

# mkdir /dev/rac
# ln -s -f /dev/rdsk/c1t500601603022E66Ad2s1 /dev/rac/ocr
# ln -s -f /dev/rdsk/c1t500601603022E66Ad2s3 /dev/rac/vot
# chown oracle:oinstall /dev/rdsk/c1t500601603022E66Ad2s1
# chown oracle:oinstall /dev/rdsk/c1t500601603022E66Ad2s3
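Before continuing, it is worth confirming on each node that the links resolve and that the underlying character devices are owned by oracle:oinstall. A quick sanity check (the -L flag makes ls follow the symbolic links to the real device nodes):

# ls -l /dev/rac
# ls -lL /dev/rac/ocr /dev/rac/vot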

Note: do not use slice s0 for the shared raw devices. Slice 0 typically starts at cylinder 0, where the disk label (VTOC) lives, so handing it to Clusterware causes a "Failed to upgrade Oracle Cluster Registry configuration" error when root.sh runs after the installation.
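If in doubt about a disk's slice layout, prtvtoc prints the VTOC and shows where each slice starts; any slice whose first sector is 0 overlaps the label and should be avoided. A sketch against the racnode1 device (s2 is used here only because it conventionally maps the whole disk):

# prtvtoc /dev/rdsk/c2t3d2s2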

Now the installation can begin. Start Xmanager, log in to racnode1, and run:

# xhost +
access control disabled, clients can connect from any host
# su - oracle
Sun Microsystems Inc.   SunOS 5.8       Generic Patch   October 2001
$ cd /data/cluster_disk
$ ./runInstaller
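If no installer window appears after su - oracle, the DISPLAY variable is usually the culprit, since su - starts a fresh environment. A minimal sketch, where workstation_ip is a placeholder for the machine running Xmanager:

$ DISPLAY=workstation_ip:0.0
$ export DISPLAY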

Once the GUI starts, click Next. Oracle prompts for the inventory path and the operating system group: the defaults are the /data/oracle/oraInventory directory and the oinstall group created earlier. Click Next.

Oracle then prompts for the OraCrs10g_home1 path, which defaults to the ORACLE_HOME path /data/oracle/product/10.2/database. Change it to /data/oracle/product/10.2/crs, add Simplified Chinese to the selected languages, and click Next.

Next, Oracle automatically checks whether the system meets the installation requirements. If everything was set up as described in the previous article, the checks pass and you can continue.

On the cluster configuration screen the default Cluster Name is crs; change it or keep the default. Oracle automatically lists the network configuration of the installing node, but the entries for racnode2 and racnode3 (racnode2/racnode2-priv/racnode2-vip and racnode3/racnode3-priv/racnode3-vip) must be added by hand. Click Next.

The next screen lists the available network interfaces; check that the PUBLIC and PRIVATE assignments match the hosts file. Because this system's public addresses begin with 172.25 (which falls inside the RFC 1918 private range 172.16.0.0/12), an Oracle bug classifies them as private and marks both interfaces PRIVATE. Manually set the interface on subnet 172.25.0.0 to PUBLIC, then click Next.

Next comes the OCR configuration. Since the OCR's shared disk comes from the storage array, which already has a RAID 0 configuration of its own, choose External Redundancy, enter the /dev/rac/ocr device created earlier in the OCR Location field, and click Next.

On the Voting Disk screen, choose External Redundancy for the same reason, enter the /dev/rac/vot device in the Voting Disk Location field, and click Next.

When the summary page appears, click Install to start the installation.

After the installation completes, two scripts must be run as root on each node, in order. Running the first script on racnode1, racnode2, and racnode3 produces the same output on every node:

# . /data/oracle/oraInventory/orainstRoot.sh
Changing permissions of /data/oracle/oraInventory to 770.
Changing groupname of /data/oracle/oraInventory to oinstall.
The execution of the script is complete

For the second script, first run it on racnode1:

# /data/oracle/product/10.2/crs/root.sh
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
ln: cannot create /data/oracle/product/10.2/crs/lib/libskgxn2.so: File exists
ln: cannot create /data/oracle/product/10.2/crs/lib32/libskgxn2.so: File exists
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
node 3: racnode3 racnode3-priv racnode3
Creating OCR keys for user 'root', privgrp 'other'..
Operation successful.
Now formatting voting device: /dev/rac/vot
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        racnode1
CSS is inactive on these nodes.
        racnode2
        racnode3
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

Then run it on racnode2:

# /data/oracle/product/10.2/crs/root.sh
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
node 3: racnode3 racnode3-priv racnode3
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        racnode1
        racnode2
CSS is inactive on these nodes.
        racnode3
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

Finally, run the script on racnode3:

# /data/oracle/product/10.2/crs/root.sh
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
node 3: racnode3 racnode3-priv racnode3
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        racnode1
        racnode2
        racnode3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "ce0" is not public. Public interfaces should be used to configure virtual IPs.

The script ends with an error. This is the Oracle bug mentioned several times already: the PUBLIC interface is treated as private, so the VIPs cannot be configured. The problem is described in detail in the final article of this series (the problem roundup); only the workaround is given here.
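As an aside, the interface roles recorded in the OCR can also be inspected and corrected from the command line with the oifcfg utility shipped with Clusterware. This is offered only as a hedged alternative, not what was done here; the interface name and subnet below are the ones used in this environment:

# cd /data/oracle/product/10.2/crs/bin
# ./oifcfg getif
# ./oifcfg setif -global ce0/172.25.0.0:public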

The simplest fix is to launch the vipca GUI and configure the VIPs manually:

# cd /data/oracle/product/10.2/crs/bin/
# ./vipca

Xmanager中启动一个终端,输入上述命令,启动vipca图形界面。点击next,出现所有可用的网络接口,由于ce0配置的是PUBLIC INTERFACT,这里选择ce0,点击next,在出现的配置中IP Alias Name分别填入:racnode1-vipracnode2-vipracnode3-vipIP address处填入:172.25.198.224172.25.198.225172.25.198.227。这里配置是正确的,那么填完一个IPOracle会自动将剩下六个配置补齐。点击next,出现汇总页面,检查无误后,点击Finish

Oracle then runs six steps: Create VIP application resource, Create GSD application resource, Create ONS application resource, Start VIP application resource, Start GSD application resource, and Start ONS application resource.

Once all six succeed, click OK to finish the VIPCA configuration.
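The nodeapps just created can be checked from the command line with srvctl, for example for racnode1 (the other nodes are analogous); at this stage it should report the VIP, GSD, and ONS resources as running:

# cd /data/oracle/product/10.2/crs/bin
# ./srvctl status nodeapps -n racnode1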

Now return to the Clusterware installer window and click OK.

Oracle then launches two configuration assistants and finally runs a verification program. When all checks complete, the end-of-installation screen appears; click Exit to finish the Clusterware installation.
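As a final sanity check, the health of the stack can be confirmed with the standard 10.2 utilities (crsctl check crs reports the CSS, CRS, and EVM daemons; crs_stat -t tabulates the registered resources):

# /data/oracle/product/10.2/crs/bin/crsctl check crs
# /data/oracle/product/10.2/crs/bin/crs_stat -t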

 

 

Source: ITPUB blog, http://blog.itpub.net/4227/viewspace-686424/
