Oracle 10g RAC Installation Guide
Notes before you begin:
Installing Oracle RAC is fairly complex, and this document cannot capture every detail of every step.
This document was written while installing RAC in virtual machines. For a real environment, simply skip steps 1 and 2; everything else is identical, because those two steps only simulate the two host nodes and the shared disk array.
Also, in a production environment, size the disks according to your actual requirements.
1. Preparation
1.1 Download the required software
Virtualization software: VMware Server 1.0.10
Linux operating system: rhel-server-5.4-i386-dvd.iso
Oracle clusterware: 10201_clusterware_linux32.zip
Oracle database software: 10201_database_linux32.zip
All of the above can be downloaded from the Internet.
1.2 Create directories for the virtual machines
Node 1: D:\vm\rac_rhel5.4\rac1
Node 2: D:\vm\rac_rhel5.4\rac2
Shared disks: D:\vm\rac_rhel5.4\sharedisk
These directories are only needed because the lab environment runs RAC inside virtual machines.
1.3 IP address plan
hostname | rac1 | rac2 |
Public IP (eth0) | 192.168.20.200 | 192.168.20.201 |
Virtual IP (eth0) | 192.168.20.210 | 192.168.20.211 |
Private IP (eth1) | 10.10.10.100 | 10.10.10.101 |
ORACLE_SID | orcl1 | orcl2 |
Adjust these to your own network; the addresses above are the ones used in this lab.
1.4 Storage disk plan
Purpose | Disk | Partitions | Size |
OCR (cluster registry) | /dev/sdb | /dev/sdb1 /dev/sdb2 | 400M |
Voting disk | /dev/sdc | /dev/sdc1 /dev/sdc2 /dev/sdc3 | 600M |
Data files | /dev/sdd | /dev/sdd1 | 5G |
Data files | /dev/sde | /dev/sde1 | 5G |
Flash recovery area | /dev/sdf | /dev/sdf1 | 5G |
Backup files | /dev/sdg | /dev/sdg1 | 5G |
These sizes are for the lab environment; in production, size them according to your actual needs. In production the OCR and voting disks are normally multiplexed (mirrored), and the same is done here.
2. Simulating the RAC environment
2.1 Create the virtual machines
Press Ctrl+N, or open File → New → Virtual Machine, to create a new virtual machine:
Click "Next"
Choose a custom virtual machine: select "Custom", then click "Next"
Because VMware Server 1.0.10 does not yet know about Red Hat 5.4, select "Linux" and "Red Hat Linux" here, then click "Next"
Select the virtual machine directory created earlier, optionally name the virtual machine, then click "Next"
Keep the default; it does not affect how the virtual machine is used. Click "Next"
Keep the default, then click "Next"
Keep the default, then click "Next"
Set the amount of memory for the virtual machine, then click "Next"
Keep the default, then click "Next"
Select "LSI Logic", then click "Next"
Keep the default (create a new disk now), then click "Next"
Keep the default, then click "Next"
Create the disk that will hold the Linux system; uncheck "Allocate all disk space now", then click "Next"
Name the local disk, then click "Finish". The virtual machine "rac1" is now created and you will see the following view:
2.2 Add a network adapter
Click "Edit virtual machine settings"
Click "Add…"
Click "Next"
Select "Ethernet Adapter", then click "Next"
Select "Host-only"; make sure "Connect at power on" is checked, then click "Finish"
Click "OK"
Repeat the steps in 2.1 and 2.2 to create the virtual machine "rac2".
2.3 Create the shared disks
Note: the shared disks only need to be created on the rac1 virtual machine; the configuration is then copied (see 2.4) to rac2.
Click "Edit virtual machine settings"
Click "Add…"
Click "Next"
Select "Hard Disk", then click "Next"
Keep the default, then click "Next"
Keep the default, then click "Next"
Be sure to check "Allocate all disk space now" to avoid the virtual machines hanging later during the RAC installation. Set the disk size, then click "Next"
Click "Browse", select the shared-disk directory created earlier, then click "Advanced"
Select "SCSI 1:0"; be sure to check "Independent" and select "Persistent", then click "Finish"
The disk is then created; how long this takes depends on the disk size.
Create the disks one by one according to the table below:
Disk | Size | Disk file | SCSI port |
/dev/sdb | 400M | sdb.vmdk | 1:0 |
/dev/sdc | 600M | sdc.vmdk | 1:1 |
/dev/sdd | 5G | sdd.vmdk | 1:2 |
/dev/sde | 5G | sde.vmdk | 1:3 |
/dev/sdf | 5G | sdf.vmdk | 1:4 |
/dev/sdg | 5G | sdg.vmdk | 1:5 |
When all disks have been created you will see the following view:
Click "OK"
2.4 Configure disk sharing
Open "D:\vm\rac_rhel5.4\rac1\Red Hat Linux.vmx" with a text editor (e.g. UltraEdit) and add the following:
scsi1:0.deviceType = "disk"
scsi1:1.deviceType = "disk"
scsi1:2.deviceType = "disk"
scsi1:3.deviceType = "disk"
scsi1:4.deviceType = "disk"
scsi1:5.deviceType = "disk"
disk.locking = "false"
diskLib.dataCacheMaxSize ="0"
diskLib.dataCacheMaxReadAheadSize ="0"
diskLib.DataCacheMinReadAheadSize ="0"
diskLib.dataCachePageSize ="4096"
diskLib.maxUnsyncedWrites = "0"
scsi1.sharedBus = "virtual"
Then make sure the disk-sharing portion of "D:\vm\rac_rhel5.4\rac1\Red Hat Linux.vmx" and "D:\vm\rac_rhel5.4\rac2\Red Hat Linux.vmx" is identical. After reopening the virtual machines, both rac1 and rac2 will show the following view:
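As a rough sketch only (the exact entries are generated by VMware when the disks are added on rac1, so treat the file name and device lines below as illustrative, not as a verbatim copy), the shared-disk portion that ends up in rac2's .vmx looks roughly like this, with each scsi1:N.fileName pointing at the .vmdk files under D:\vm\rac_rhel5.4\sharedisk, plus the scsi1:N.deviceType, disk.locking and diskLib.* lines shown above:
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "D:\vm\rac_rhel5.4\sharedisk\sdb.vmdk"
scsi1:0.deviceType = "disk"
(and correspondingly scsi1:1 through scsi1:5 for sdc.vmdk ... sdg.vmdk)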
2.5 Install Linux
Note: perform this step on both the rac1 and rac2 virtual machines.
Click "Edit virtual machine settings"
Select "CD-ROM" and point it at the ISO file
Click "OK" to save
Click "Start this virtual machine" to begin installing Linux
Press "Enter"
Do not check the installation media: select "Skip" and press "Enter"
Click "Next"
Click "Next"
Click "Next"
Do not enter an installation number: select "Skip entering …." and press "OK"
Press "Skip" again
Click "Yes" to format the disks, repeating until /dev/sdg has been formatted
Select "Create custom layout" to partition manually; my layout is as follows:
Click "Next"
Configure the IP addresses, hostname and gateway according to section 1.3, then click "Next":
Select the time zone and click "Next"
Set the root password and click "Next"
Click "Next"
Choose "Customize now" to pick packages, then click "Next"
In short, install as many RPM packages as reasonably possible; when done, click "Next"
Click "Next" to start installing the Linux system
2.6 Configure Linux (first boot)
Note: perform this step on both the rac1 and rac2 virtual machines.
When the installation finishes, the following view is shown:
Click "Reboot" to restart the system
Click "Forward"
Click "Forward"
Select "Disabled" to disable the firewall, then click "Forward"
Select "Disabled" (SELinux), then click "Forward"
Click "Forward"
Click "Forward"
Choose not to register, then click "Forward"
Click "Forward"
Click "Forward"
Click "Forward"
Click "Finish" and reboot the system
3. Pre-installation system configuration
Note: unless stated otherwise, everything below is done as root on both rac1 and rac2.
3.1 Edit /etc/redhat-release and change it to the line below (so the 10g installer treats the OS as a supported release)
Red Hat Enterprise Linux Server release 4 (Tikanga)
3.2 Edit /etc/hosts and add
127.0.0.1 localhost
192.168.20.200 rac1
192.168.20.210 rac1-vip
10.10.10.100 rac1-priv
192.168.20.201 rac2
192.168.20.211 rac2-vip
10.10.10.101 rac2-priv
3.3 Edit /etc/sysctl.conf, add the following, then run sysctl -p to apply it
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144
3.4 Edit /etc/security/limits.conf and add the following
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
3.5 Edit /etc/pam.d/login and add the following
session required /lib/security/pam_limits.so
session required pam_limits.so
3.6 Edit /etc/profile and add the following
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
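After logging in again as oracle, a quick check (a minimal sketch; given the settings above you would expect roughly 16384 and 65536) confirms the limits are in effect:
su - oracle -c "ulimit -u"
su - oracle -c "ulimit -n"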
3.7 Install the RPM packages needed for ASMLib
Download from:
http://www.oracle.com/technetwork/server-storage/linux/downloads/index.html
Make sure the packages you download match your Linux kernel version; check the kernel with:
[root@rac1 ~]# uname -r
2.6.18-164.el5
[root@rac1 ~]# uname -a
Linux rac1 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686 i686 i386 GNU/Linux
Installation commands:
[root@rac1 asm redhat5.4]# rpm --import /etc/pki/rpm-gpg/RPM*
[root@rac1 asm redhat5.4]# rpm -Uvh *.rpm --force --nodeps
The packages I installed are as follows:
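The original screenshot of the package list is not reproduced here. For this kernel (2.6.18-164.el5, i686) the ASMLib set is typically three packages along the lines of the following; treat the exact versions as illustrative:
oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm
oracleasm-support-2.1.3-1.el5.i386.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm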
3.8 Install the RPM packages needed by the database
For the database, too, install more RPM packages rather than fewer; the packages I installed are as follows:
3.9 Configure the hangcheck-timer module
Load the module automatically at boot by adding the following to /etc/rc.d/rc.local:
modprobe hangcheck-timer
Set the module parameters by adding the following to /etc/modprobe.conf:
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
Confirm that the module loaded successfully:
grep Hangcheck /var/log/messages | tail -2
3.10 Configure time synchronization
Note: the clocks in the virtual machines drift badly, which is why time is synchronized every minute here. In production, do whatever fits your environment: synchronize against a time server or against one of the nodes, using any of the usual tools such as ntpdate or rdate.
On rac1, enable the time-stream service and set it to start automatically:
chkconfig time-stream on
On rac2, add a cron job that synchronizes the time with rac1 once a minute:
*/1 * * * * rdate -s 192.168.20.200
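As a hedged aside (time-stream is an xinetd-managed service on RHEL 5, so if the change does not take effect immediately a restart of xinetd is a harmless extra step, and the cron entry above is installed as root on rac2):
[root@rac1 ~]# service xinetd restart
[root@rac2 ~]# crontab -e      (add the */1 line shown above)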
3.11 Create the Oracle user and groups
Note: the oracle user must have the same uid and gid on rac1 and rac2, which is why the uid and gid are specified explicitly here.
[root@rac1 u01]# groupadd -g 500 dba
[root@rac1 u01]# groupadd -g 501 oinstall
[root@rac1 u01]# useradd -u 500 -g oinstall -G dba oracle
[root@rac1 u01]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@rac1 u01]# mkdir -p /u01/app/oracle/product/10.2.0/db_1
[root@rac1 u01]# mkdir -p /u01/app/oracle/product/10.2.0/crs_1
[root@rac1 u01]# chown -R oracle:oinstall /u01
[root@rac1 u01]# chmod -R 755 /u01
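To confirm that the uid and gid really do match, a quick check on both nodes (the output shown is what you would expect given the values used above):
[root@rac1 ~]# id oracle
uid=500(oracle) gid=501(oinstall) groups=501(oinstall),500(dba)
[root@rac2 ~]# id oracle
uid=500(oracle) gid=501(oinstall) groups=501(oinstall),500(dba)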
3.12 Configure the oracle user's environment variables
Note: the settings below are from rac1; for rac2 the only change needed is to set ORACLE_SID to orcl2.
export ORACLE_SID=orcl1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=.:$PATH:$HOME/bin:$ORA_CRS_HOME/bin:$ORACLE_HOME/bin
umask 022
stty erase ^h
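These lines would normally be appended to the oracle user's ~/.bash_profile on each node (an assumption; the document does not name the file). A quick check after logging in again as oracle:
[oracle@rac1 ~]$ echo $ORACLE_SID $ORACLE_HOME
orcl1 /u01/app/oracle/product/10.2.0/db_1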
3.13 Partition the disks
Note: the partitioning itself is done only on rac1; rac2 then needs a reboot to see the new partition tables. Partition with fdisk, and do not create filesystems on the partitions, otherwise they can no longer be used as raw devices.
After partitioning, my layout looks like this:
[root@rac1 10.2.0]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 144 1052257+ 82 Linux swap / Solaris
/dev/sda3 145 2610 19808145 83 Linux
Disk /dev/sdb: 429 MB, 429496320 bytes
64 heads, 32 sectors/track, 409 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 192 196592 83 Linux
/dev/sdb2 193 409 222208 83 Linux
Disk /dev/sdc: 644 MB, 644244992 bytes
64 heads, 32 sectors/track, 614 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 192 196592 83 Linux
/dev/sdc2 193 384 196608 83 Linux
/dev/sdc3 385 614 235520 83 Linux
Disk /dev/sdd: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 652 5237158+ 83 Linux
Disk /dev/sde: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 652 5237158+ 83 Linux
Disk /dev/sdf: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 652 5237158+ 83 Linux
Disk /dev/sdg: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdg1 1 652 5237158+ 83 Linux
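For reference, a minimal sketch of how one of these disks is partitioned interactively with fdisk (using /dev/sdb and the sizes from section 1.4 as the example; prompts abbreviated, and your cylinder counts will differ):
[root@rac1 ~]# fdisk /dev/sdb
Command (m for help): n            (new partition)
   p                               (primary)
Partition number (1-4): 1
First cylinder: <Enter>
Last cylinder or +size: +200M
Command (m for help): n
   p
Partition number (1-4): 2
First cylinder: <Enter>
Last cylinder or +size: <Enter>    (use the rest of the disk)
Command (m for help): w            (write the partition table)
On rac2, reboot afterwards (or run partprobe) so it sees the new partition tables.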
3.14 Configure the raw devices
Note: this is the RHEL 5.4 way of doing it; on RHEL 4.x the configuration is different and easy to find online. The purpose of this configuration is to give the oracle user read/write access to the disks.
Edit /etc/sysconfig/rawdevices and add the following:
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
/dev/raw/raw3 /dev/sdc1
/dev/raw/raw4 /dev/sdc2
/dev/raw/raw5 /dev/sdc3
/dev/raw/raw6 /dev/sdd1
/dev/raw/raw7 /dev/sde1
/dev/raw/raw8 /dev/sdf1
/dev/raw/raw9 /dev/sdg1
Edit /etc/udev/rules.d/60-raw.rules and add the following:
ACTION=="add",KERNEL=="sdb1",RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add",KERNEL=="sdb2",RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add",KERNEL=="sdc1",RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add",KERNEL=="sdc2",RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add",KERNEL=="sdc3",RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add",KERNEL=="sdd1",RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add",KERNEL=="sde1",RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add",KERNEL=="sdf1",RUN+="/bin/raw /dev/raw/raw8 %N"
ACTION=="add",KERNEL=="sdg1",RUN+="/bin/raw /dev/raw/raw9 %N"
ACTION=="add",KERNEL=="raw[1-9]", OWNER="oracle",GROUP="oinstall", MODE="660"
Reboot the system and verify:
[root@rac2 ~]# ll /dev/raw/raw*
crw-rw---- 1 oracle oinstall 162, 1 Aug 2 18:34 /dev/raw/raw1
crw-rw---- 1 oracle oinstall 162, 2 Aug 2 18:34 /dev/raw/raw2
crw-rw---- 1 oracle oinstall 162, 3 Aug 2 18:34 /dev/raw/raw3
crw-rw---- 1 oracle oinstall 162, 4 Aug 2 18:34 /dev/raw/raw4
crw-rw---- 1 oracle oinstall 162, 5 Aug 2 18:34 /dev/raw/raw5
crw-rw---- 1 oracle oinstall 162, 6 Aug 2 18:34 /dev/raw/raw6
crw-rw---- 1 oracle oinstall 162, 7 Aug 2 18:34 /dev/raw/raw7
crw-rw---- 1 oracle oinstall 162, 8 Aug 2 18:34 /dev/raw/raw8
crw-rw---- 1 oracle oinstall 162, 9 Aug 2 18:34 /dev/raw/raw9
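If a full reboot is inconvenient, re-triggering udev should also apply the new rules (a sketch; start_udev is the stock RHEL 5 helper, and the ls simply repeats the check above):
[root@rac1 ~]# start_udev
[root@rac1 ~]# ls -l /dev/raw/raw*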
3.15 Create the ASM disks
/etc/init.d/oracleasm configure
/etc/init.d/oracleasm createdisk VOL1 /dev/sdd1
/etc/init.d/oracleasm createdisk VOL2 /dev/sde1
/etc/init.d/oracleasm createdisk VOL3 /dev/sdf1
/etc/init.d/oracleasm createdisk VOL4 /dev/sdg1
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
Of the commands above, only the VOL* createdisk steps need to be run on rac1 alone! The ASMLib packages installed earlier are exactly what makes these commands available.
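To confirm that rac2 also sees the disks, a quick check there (the listdisks output simply echoes the labels created above; shown here as the expected result):
[root@rac2 ~]# /etc/init.d/oracleasm scandisks
[root@rac2 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4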
3.16 Configure SSH user equivalence
Note: this step is done as the oracle user. The RAC installation is run on one node and Oracle then copies the software to the other node automatically. The final test must show the date without prompting for a password, otherwise the installation is guaranteed to fail!
First, on every node, run:
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
Then, on rac1 only, run:
cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
Finally, test from every node: verify that running the following commands again does not prompt for a password.
ssh rac1 date
ssh rac1-priv date
ssh rac2 date
ssh rac2-priv date
This test must succeed before you can proceed with the rest of the installation!
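A compact way to run the whole check from each node (a sketch; note that the very first connection to each address may still ask you to accept the host key, which is expected):
for h in rac1 rac1-priv rac2 rac2-priv; do ssh $h date; done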
4. Install Oracle Clusterware
Note: unless stated otherwise, the following is done as the oracle user.
4.1 Check the installation environment
[oracle@rac1 clusterware]$ /u01/clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node"rac1"
Destination Node Reachable?
------------------------------------ ------------------------
rac2 yes
rac1 yes
Result: Node reachability check passed from node "rac1".
Checking user equivalence...
Check: User equivalence for user"oracle"
Node Name Comment
------------------------------------ ------------------------
rac2 passed
rac1 passed
Result: User equivalence check passed for user "oracle".
Checking administrative privileges...
Check: Existence of user "oracle"
Node Name User Exists Comment
------------ ------------------------ ------------------------
rac2 yes passed
rac1 yes passed
Result: User existence check passed for"oracle".
Check: Existence of group"oinstall"
Node Name Status Group ID
------------ ------------------------ ------------------------
rac2 exists 501
rac1 exists 501
Result: Group existence check passed for"oinstall".
Check: Membership of user"oracle" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 yes yes yes yes passed
rac1 yes yes yes yes passed
Result: Membership check for user"oracle" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
Interface information for node"rac2"
Interface Name IPAddress Subnet
------------------------------ ------------------------------ ----------------
eth0 192.168.20.201 192.168.20.0
eth1 10.10.10.101 10.10.10.0
Interface information for node"rac1"
Interface Name IP Address Subnet
------------------------------ ------------------------------ ----------------
eth0 192.168.20.200 192.168.20.0
eth1 10.10.10.100 10.10.10.0
Check: Node connectivity of subnet"192.168.20.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac2:eth0 rac1:eth0 yes
Result: Node connectivity check passed for subnet "192.168.20.0" with node(s) rac2,rac1.
Check: Node connectivity of subnet "10.10.10.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac2:eth1 rac1:eth1 yes
Result: Node connectivity check passed for subnet "10.10.10.0" with node(s) rac2,rac1.
Suitable interfaces for the private interconnect on subnet "192.168.20.0":
rac2 eth0:192.168.20.201
rac1 eth0:192.168.20.200
Suitable interfaces for the private interconnect on subnet "10.10.10.0":
rac2 eth1:10.10.10.101
rac1 eth1:10.10.10.100
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
Checking system requirements for 'crs'...
Check: Total memory
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 503.26MB (515340KB) 512MB (524288KB) failed
rac1 503.26MB (515340KB) 512MB (524288KB) failed
Result: Total memory check failed.
Check: Free disk space in "/tmp"dir
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 13.33GB (13979824KB) 400MB (409600KB) passed
rac1 13.22GB (13860900KB) 400MB (409600KB) passed
Result: Free disk space check passed.
Check: Swap space
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 1GB (1052248KB) 1GB (1048576KB) passed
rac1 1GB (1052248KB) 1GB (1048576KB) passed
Result: Swap space check passed.
Check: System architecture
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 i686 i686 passed
rac1 i686 i686 passed
Result: System architecture check passed.
Check: Kernel version
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 2.6.18-164.el5 2.4.21-15EL passed
rac1 2.6.18-164.el5 2.4.21-15EL passed
Result: Kernel version check passed.
Check: Package existence for"make-3.79"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
rac2 make-3.81-3.el5 passed
rac1 make-3.81-3.el5 passed
Result: Package existence check passed for"make-3.79".
Check: Package existence for "binutils-2.14"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
rac2 binutils-2.17.50.0.6-12.el5 passed
rac1 binutils-2.17.50.0.6-12.el5 passed
Result: Package existence check passed for"binutils-2.14".
Check: Package existence for"gcc-3.2"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
rac2 gcc-4.1.2-46.el5 passed
rac1 gcc-4.1.2-46.el5 passed
Result: Package existence check passed for"gcc-3.2".
Check: Package existence for"glibc-2.3.2-95.27"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
rac2 glibc-2.5-42 passed
rac1 glibc-2.5-42 passed
Result: Package existence check passed for"glibc-2.3.2-95.27".
Check: Package existence for"compat-db-4.0.14-5"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
rac2 compat-db-4.2.52-5.1 passed
rac1 compat-db-4.2.52-5.1 passed
Result: Package existence check passed for"compat-db-4.0.14-5".
Check: Package existence for"compat-gcc-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
rac2 compat-gcc-7.3-2.96.128 passed
rac1 compat-gcc-7.3-2.96.128 passed
Result: Package existence check passed for"compat-gcc-7.3-2.96.128".
Check: Package existence for"compat-gcc-c++-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
rac2 compat-gcc-c++-7.3-2.96.128 passed
rac1 compat-gcc-c++-7.3-2.96.128 passed
Result: Package existence check passed for"compat-gcc-c++-7.3-2.96.128".
Check: Package existence for"compat-libstdc++-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
rac2 compat-libstdc++-7.3-2.96.128 passed
rac1 compat-libstdc++-7.3-2.96.128 passed
Result: Package existence check passed for"compat-libstdc++-7.3-2.96.128".
Check: Package existence for"compat-libstdc++-devel-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
rac2 compat-libstdc++-devel-7.3-2.96.128 passed
rac1 compat-libstdc++-devel-7.3-2.96.128 passed
Result: Package existence check passed for"compat-libstdc++-devel-7.3-2.96.128".
Check: Package existence for"openmotif-2.2.3"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
rac2 openmotif-2.3.1-2.el5 passed
rac1 openmotif-2.3.1-2.el5 passed
Result: Package existence check passed for"openmotif-2.2.3".
Check: Package existence for "setarch-1.3-1"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
rac2 setarch-2.0-1.1 passed
rac1 setarch-2.0-1.1 passed
Result: Package existence check passed for"setarch-1.3-1".
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 exists passed
rac1 exists passed
Result: Group existence check passed for"dba".
Check: Group existence for"oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 exists passed
rac1 exists passed
Result: Group existence check passed for"oinstall".
Check: User existence for"nobody"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 exists passed
rac1 exists passed
Result: User existence check passed for"nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup wasunsuccessful on all the nodes.
A few of these checks are allowed to fail:
Because this is a virtual environment, my memory does not meet the requirement.
No VIP network was found; this will be fixed later by running vipca manually.
4.2 Install clusterware
Note: the installation is run on rac1. That is also why time synchronization was set up to have rac2 follow rac1: changing the clock can cause a node to reboot. To be safe, it is best if rac1's clock runs a few seconds ahead of rac2's.
The installer needs a graphical display; here I use Xmanager, so first run:
export DISPLAY=192.168.20.150:0.0;
Then start the installation by running /u01/clusterware/runInstaller
点击“Next”
点击“Next”
选择crs安装的目录,点击“Next”
这里会有一个警告,就是内存不足,手动勾选上,点击“Next”
这里是在rac1上面安装的,只会检查到rac1的配置,点击“add”,手动添加rac2的配置
点击“Next”
点击“Edit”,编辑网卡配置如下:
点击“Next”
填写ocr存放设备,然后点击“Next”
填写voting disk存放路径,然后点击“Next”
点击“Install”开始安装
安装的一定时候会弹出一个对话框要求以root身份运行脚本
建议脚本运行顺序:
rac1 /u01/app/oracle/oraInventory/orainstRoot.sh
rac2 /u01/app/oracle/oraInventory/orainstRoot.sh
rac1 /u01/app/oracle/product/10.2.0/crs_1/root.sh
rac2 /u01/app/oracle/product/10.2.0/crs_1/root.sh
Before running the scripts, two files need to be edited; this works around a known bug:
In $CRS_HOME/bin/vipca, add the unset LD_ASSUME_KERNEL line shown in the middle below:
esac
unset LD_ASSUME_KERNEL
ARGUMENTS=""
In $CRS_HOME/bin/srvctl, likewise add the unset LD_ASSUME_KERNEL line shown in the middle below:
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL
# Run opscontrol utility
Output from running the second script on rac1:
[root@rac1 bin]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
CSS is inactive on these nodes.
rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
Output from running the second script on rac2:
[root@rac2 oracle]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.
When root.sh finishes on rac2 there is an error at the end, "eth0 is not public". This error is expected; running vipca, described next, is exactly how it is fixed.
4.3 Run vipca
Run vipca as root on rac2:
[root@rac2 oracle]# cd /u01/app/oracle/product/10.2.0/crs_1/bin/
[root@rac2 bin]# export DISPLAY=192.168.20.150:0.0;
[root@rac2 bin]# ./vipca
点击“Next”
点击“Next”
填写如上内容,点击“Next”
点击“Finish”
开始创建,等到100%的时候
点击“OK”
点击“Exit”就消失了
但还有一个窗口存在
运行完脚本之后点击“OK”,会进行一些检查,如果通过就能看见如下页面
点击“Exit”和“Yes”至此集群软件就安装完成了!
4.4 Verify the installation
Run the following on both rac1 and rac2; you should see output like this:
[oracle@rac1 clusterware]$ olsnodes
rac1
rac2
[oracle@rac1 clusterware]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@rac1 clusterware]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
一定要所有的都“ONLINE”,才表示安装成功,可以继续后面的操作!
4.5 Common errors
If you see "No such file or directory", it is because the two files were not edited before running the scripts.
If you hit "Error 0(Native: listNetInterfaces:[3])", the fix is:
[root@rac2 bin]# ./oifcfg iflist
eth0 192.168.20.0
eth1 10.10.10.0
[root@rac2 bin]# ./oifcfg setif -global eth0/192.168.20.0:public
[root@rac2 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@rac2 bin]# ./oifcfg getif
eth0 192.168.20.0 global public
eth1 10.10.10.0 global cluster_interconnect
If a warning box pops up during the installation, it is because rac1's clock runs ahead of rac2's; it can be ignored and does not affect anything later.
5. Install the Oracle Database software
Note: when installing the database it is recommended to install the software only; the database itself is created later with dbca.
[oracle@rac1 clusterware]$ export DISPLAY=192.168.20.150:0.0;
[oracle@rac1 clusterware]$ cd /u01/database/
[oracle@rac1 database]$ ./runInstaller
点击“Next”
选择“Enterprise Edition”,然后点击“Next”
点击“Next”
点击“Select All”,然后点击“Next”
手动勾选内存不足那个警告,然后点击“Next”
选择只安装软件,不创建数据库,然后点击“Next”
点击“Install”开始安装数据库
到安装到一定时候会弹出对话框,要求以root用户执行脚本
建议执行顺序:
rac1 /u01/app/oracle/product/10.2.0/db_1/root.sh
rac2 /u01/app/oracle/product/10.2.0/db_1/root.sh
执行完毕后点击“OK”
点击“Exit”和“Yes”,至此数据库软件也安装成功了!
6. Configure the Listener
6.1 Setup
[oracle@rac1 database]$ export DISPLAY=192.168.20.150:0.0;
[oracle@rac1 database]$ netca
Select the cluster configuration, then click "Next"
Select all nodes, then click "Next"
Select listener configuration, then click "Next"
Select add a listener, then click "Next"
You can name the listener; the default is normally fine and I keep it here, then click "Next"
TCP is usually all that is needed, then click "Next"
You can also choose the listener port; the default is used here as well, then click "Next"
Select "No", do not configure another listener, then click "Next"
Click "Next"
Click "Finish". Configuring the listener is very simple: essentially a matter of clicking Next all the way through with the defaults.
6.2 Verify
Check whether the listeners were created successfully:
[oracle@rac2 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
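Beyond crs_stat, the listener on each node can also be checked directly (a sketch; the listener name assumes the default that netca generates, LISTENER_<hostname>, and the output is not shown here):
[oracle@rac1 ~]$ lsnrctl status listener_rac1
[oracle@rac2 ~]$ lsnrctl status listener_rac2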
7. Create ASM
[oracle@rac1 database]$ export DISPLAY=192.168.20.150:0.0;
[oracle@rac1 database]$ dbca
Click "Next"
Select "Configure Automatic Storage Management", then click "Next"
Click "Select All", then click "Next"
Set the administrator password (the SYS and SYSTEM users) and choose a pfile for the ASM instance parameter file, then click "Next". A dialog pops up; click "OK" and the ASM instances are created
Click "Create New" to create the disk groups, then click "Finish".
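A quick way to confirm the disk groups afterwards (a sketch; run on rac1 and assuming the ASM instance there is +ASM1, as created by dbca):
[oracle@rac1 ~]$ export ORACLE_SID=+ASM1
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL> select name, state, total_mb from v$asm_diskgroup;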
8. Create the Database
8.1 Setup
[oracle@rac1 database]$ export DISPLAY=192.168.20.150:0.0;
[oracle@rac1 database]$ dbca
Click "Next"
Select create a database, then click "Next"
Create it on all nodes: click "Select All", then click "Next"
Select "Custom Database", then click "Next"
Enter the SID; it must match the ORACLE_SID configured earlier for the oracle user, then click "Next"
Because this is a lab environment I skip installing EM; in production it is worth considering. Then click "Next"
EM (Enterprise Manager) is simply a browser-based graphical tool for managing the database.
Enter the administrator passwords (SYS and SYSTEM), then click "Next"
Select "ASM", then click "Next"
Select the disk group that will hold the data files, then click "Next"
Click "Next"
Specify the flash recovery area and enable archive log mode; in production this is a must. Click "Next"
Because this is a lab environment, none of the optional components are installed. Click "Next"
Click "Next"
Here you can set how much memory the database gets, the database character set, and other options. Click "Next"
Note that the database character set can only be chosen at database creation time; once the database exists it cannot be changed, so choose carefully for production.
Click "Next"
Click "Finish"
Click "OK" to start creating the database
This can take quite a while; wait for it to finish
Click "Exit", and the rac1 and rac2 instances are started
Once startup completes, all the windows close automatically. Congratulations, your RAC environment is now up and running!
8.2 Verify
[oracle@rac1 database]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.orcl.db application ONLINE ONLINE rac2
ora....l1.inst application ONLINE ONLINE rac1
ora....l2.inst application ONLINE ONLINE rac2
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
[oracle@rac1 database]$ srvctl status nodeapps -n rac1
VIP is running on node: rac1
GSD is running on node: rac1
Listener is running on node: rac1
ONS daemon is running on node: rac1
[oracle@rac1 database]$ srvctl status nodeapps -n rac2
VIP is running on node: rac2
GSD is running on node: rac2
Listener is running on node: rac2
ONS daemon is running on node: rac2
[oracle@rac1 database]$ srvctl status asm -n rac1
ASM instance +ASM1 is running on node rac1.
[oracle@rac1 database]$ srvctl status asm -n rac2
ASM instance +ASM2 is running on node rac2.
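The database as a whole can be checked the same way (the two output lines are what srvctl prints when both instances are up):
[oracle@rac1 database]$ srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2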
8.3 Stopping RAC
srvctl stop service -d <database name> -s <service name>
srvctl stop database -d <database name>
srvctl stop asm -n <node1 hostname>
srvctl stop asm -n <node2 hostname>
srvctl stop nodeapps -n <node1 hostname>
srvctl stop nodeapps -n <node2 hostname>
crs_stat -t
The actual commands (run on one node only):
[oracle@rac1 database]$ srvctl stop service -d orcl
[oracle@rac1 database]$ srvctl stop database -d orcl
[oracle@rac1 database]$ srvctl stop asm -n rac1
[oracle@rac1 database]$ srvctl stop asm -n rac2
[oracle@rac1 database]$ srvctl stop nodeapps -n rac1
[oracle@rac1 database]$ srvctl stop nodeapps -n rac2
[oracle@rac1 database]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.orcl.db application OFFLINE OFFLINE
ora....l1.inst application OFFLINE OFFLINE
ora....l2.inst application OFFLINE OFFLINE
ora....SM1.asm application OFFLINE OFFLINE
ora....C1.lsnr application OFFLINE OFFLINE
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application OFFLINE OFFLINE
ora.rac1.vip application OFFLINE OFFLINE
ora....SM2.asm application OFFLINE OFFLINE
ora....C2.lsnr application OFFLINE OFFLINE
ora.rac2.gsd application OFFLINE OFFLINE
ora.rac2.ons application OFFLINE OFFLINE
ora.rac2.vip application OFFLINE OFFLINE
8.4 Starting RAC
srvctl start nodeapps -n <node1 hostname>
srvctl start nodeapps -n <node2 hostname>
srvctl start asm -n <node1 hostname>
srvctl start asm -n <node2 hostname>
srvctl start database -d <database name>
srvctl start service -d <database name> -s <service name>
crs_stat -t
The actual commands (run on one node only):
[oracle@rac1 ~]$ srvctl start nodeapps -n rac2
[oracle@rac1 ~]$ srvctl start nodeapps -n rac1
[oracle@rac1 ~]$ srvctl start asm -n rac2
[oracle@rac1 ~]$ srvctl start asm -n rac1
[oracle@rac1 ~]$ srvctl start database -d orcl
[oracle@rac1 ~]$ srvctl start service -d orcl
[oracle@rac1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.orcl.db application ONLINE ONLINE rac1
ora....l1.inst application ONLINE ONLINE rac1
ora....l2.inst application ONLINE ONLINE rac2
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
9. Trying out Failover
9.1 The client's hosts file
On Windows the file is C:\Windows\System32\drivers\etc\hosts
On Linux it is /etc/hosts
Add the following:
192.168.20.200 rac1
192.168.20.201 rac2
192.168.20.210 rac1-vip
192.168.20.211 rac2-vip
9.2 The client's tnsnames.ora
Edit the client's tnsnames.ora and add the following:
orcl =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
)
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
9.3 Trying out failover
Open a cmd window and connect to the RAC database with sqlplus. You can see that the instance currently serving us is orcl1, i.e. the rac1 host:
Then log on to the rac1 host and shut the database down with abort, simulating a power failure where the host suddenly dies:
Finally, run the query again in that same client session (wait a few seconds):
Now the instance serving us is orcl2, i.e. the rac2 host. This is RAC transparent failover: the client never notices that one of the database nodes has died.
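The screenshots are not reproduced here; as a sketch, the session looks roughly like this (the query, the abort shutdown and the password are only examples following the description above):
C:\> sqlplus system/oracle@orcl
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
orcl1
(on rac1, as the oracle user)
[oracle@rac1 ~]$ export ORACLE_SID=orcl1
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL> shutdown abort
(back in the client window, after a few seconds)
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
orcl2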
10. Connecting to RAC with JDBC
Test code:
import java.sql.Connection;
import java.sql.DriverManager;

public class RacJdbcTest {
    public static void main(String[] args) throws Exception {
        String clszz = "oracle.jdbc.driver.OracleDriver";
        String url = "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1-vip)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=rac2-vip)(PORT=1521)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcl)))";
        String username = "system";
        String password = "oracle";
        Class.forName(clszz);
        Connection conn = DriverManager.getConnection(url, username, password);
        System.out.println(conn);
    }
}
Output:
oracle.jdbc.driver.T4CConnection@16930e2