Installing Oracle 10g RAC on CentOS 4.6: Tips

I spent some time setting up an Oracle 10g RAC environment on my own hardware.
The installation steps are fairly lengthy and somewhat fiddly; the walkthrough I followed is at www.oracle.com/technology/global/cn/pub/articles/chan-ubl-vmware.html
Here are the tips I collected along the way, for future reference:

Preparing the shared disks
---------------------------------
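(The shared virtual disks themselves are created as described in the VMware article linked above.) Before partitioning, a quick sanity check that both nodes see the same set of shared disks (a sketch; the device names sda-sde match the layout used later in these notes):
[root@rac1 ~]# fdisk -l 2>/dev/null | grep '^Disk /dev/sd'
[root@rac2 ~]# fdisk -l 2>/dev/null | grep '^Disk /dev/sd'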

After installing CentOS 4.6, the following RPM packages also need to be installed (see the install sketch after the list):
------------------------------------------------------------------
libaio-0.3.105-2.i386.rpm
openmotif21-2.1.30-11.RHEL4.6.i386.rpm
sysstat-5.0.5-14.rhel4.i386.rpm
oracleasm-2.6.9-67.EL-2.0.3-1.i686.rpm
oracleasmlib-2.0.2-1.i386.rpm
oracleasm-support-2.0.3-1.i386.rpm
ocfs2-2.6.9-42.EL-1.2.3-1.i686.rpm
ocfs2console-1.2.1-1.i386.rpm
ocfs2-tools-1.2.1-1.i386.rpm
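A minimal install sketch, assuming the RPMs above have been downloaded into the current directory (run as root on both nodes; note that the oracleasm and ocfs2 kernel-module packages must match `uname -r`, as discussed in the OCFS2 section below):
[root@rac1 ~]# rpm -Uvh libaio-*.rpm openmotif21-*.rpm sysstat-*.rpm
[root@rac1 ~]# rpm -Uvh oracleasm-support-*.rpm oracleasm-2.6.9-*.rpm oracleasmlib-*.rpm
[root@rac1 ~]# rpm -Uvh ocfs2-tools-*.rpm ocfs2-2.6.9-*.rpm ocfs2console-*.rpm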


Name resolution entries in /etc/hosts:
----------------------------------------
[oracle@rac1 ~]$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               rac1.cn.ibm.com rac1 localhost.localdomain localhost
192.168.0.77            rac1    rac1.cn.ibm.com
192.168.0.5             rac1-vip
10.10.10.31             rac1-priv

192.168.0.78            rac2    rac2.cn.ibm.com
192.168.0.6             rac2-vip
10.10.10.32             rac2-priv


Add the hangcheck-timer module options to /etc/modprobe.conf
-----------------------------------------
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
and then load the module:
modprobe -v hangcheck-timer
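A quick check that the module is actually loaded (a sketch; the exact output will vary):
[root@rac1 ~]# lsmod | grep hangcheck
[root@rac1 ~]# dmesg | grep -i hangcheck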


Partitioning the OCFS2 and ASM disks (fdisk)
---------------------------------------------
fdisk /dev/sdb
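A sketch of the interactive fdisk session used to create a single primary partition spanning each shared disk (repeat for /dev/sdc, /dev/sdd and /dev/sde; the defaults for the first and last cylinder are simply accepted):
Command (m for help): n      <- new partition
Command action: p            <- primary
Partition number (1-4): 1
First cylinder: <Enter>      <- accept default
Last cylinder: <Enter>       <- accept default, use the whole disk
Command (m for help): w      <- write the partition table and exit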

Bind the device files of three of the disks to raw devices for ASM (sda is the local disk, sdb is used for OCFS2)
----------------------------------------------------------------------------------------
[oracle@rac1 ~]$ cat /etc/sysconfig/rawdevices
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# raw device bindings
# format:  <rawdev> <major> <minor>
#          <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5
/dev/raw/raw1 /dev/sdc1
/dev/raw/raw2 /dev/sdd1
/dev/raw/raw3 /dev/sde1

Restart the rawdevices service as root:
[root@rac1 ~]# service rawdevices restart
Assigning devices:
           /dev/raw/raw1  -->   /dev/sdc1
/dev/raw/raw1:  bound to major 8, minor 33
           /dev/raw/raw2  -->   /dev/sdd1
/dev/raw/raw2:  bound to major 8, minor 49
           /dev/raw/raw3  -->   /dev/sde1
/dev/raw/raw3:  bound to major 8, minor 65
done

Grant the oracle user access to the /dev/raw devices:
[root@rac1 ~]# chown oracle:dba /dev/raw/raw[1-3]
[root@rac1 ~]# chmod 660 /dev/raw/raw[1-3]
[root@rac1 ~]# ls -l /dev/raw/raw*
crw-rw----  1 oracle dba 162, 1 Jan  3 16:59 /dev/raw/raw1
crw-rw----  1 oracle dba 162, 2 Jan  3 16:59 /dev/raw/raw2
crw-rw----  1 oracle dba 162, 3 Jan  3 16:59 /dev/raw/raw3

Then create symbolic links for the ASM disk paths:
[oracle@rac1 ~]# ln -sf /dev/raw/raw1 /u01/oradata/devdb/asmdisk1
[oracle@rac1 ~]# ln -sf /dev/raw/raw2 /u01/oradata/devdb/asmdisk2
[oracle@rac1 ~]# ln -sf /dev/raw/raw3 /u01/oradata/devdb/asmdisk3

Edit /etc/udev/permissions.d/50-udev.permissions
--------------------------------------------------------------------
Change the raw device entries as follows, so that the ownership set above survives a reboot:
# raw devices
ram*:root:disk:0660
#raw/*:root:disk:0660
raw/*:oracle:dba:0660
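After a reboot it is worth confirming that udev has re-applied the oracle:dba ownership shown earlier (a quick check):
[root@rac1 ~]# ls -l /dev/raw/raw*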

Establish ssh user equivalence (trust) between the RAC nodes
-----------------------------------------
su - oracle
mkdir ~/.ssh
chmod 700  ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
(Note that the scp above replaces rac2's authorized_keys; rac2's own public keys should be appended as well so that equivalence works in both directions.) Then test the trust between the two nodes, as sketched below.
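A minimal test sketch (repeat from rac2; each command should print the date without prompting for a password, and the first run also populates known_hosts for the installer):
[oracle@rac1 ~]$ ssh rac1 date
[oracle@rac1 ~]$ ssh rac2 date
[oracle@rac1 ~]$ ssh rac1-priv date
[oracle@rac1 ~]$ ssh rac2-priv date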

Configure Oracle ASMLib as root on both nodes
--------------------------------------------------------------------
[root@rac1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [  OK  ]
Loading module "oracleasm": [  OK  ]
Mounting ASMlib driver filesystem: [  OK  ]
Scanning system for ASM disks: [  OK  ]

Create the ASM disks as root on any one of the nodes
-------------------------------------------------------------
[root@rac1 ~]# /etc/init.d/oracleasm createdisk mydisk1 /dev/sdc1
Marking disk "/dev/sdc1" as an ASM disk: [  OK  ]
[root@rac1 ~]# /etc/init.d/oracleasm createdisk mydisk2 /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk: [  OK  ]
[root@rac1 ~]# /etc/init.d/oracleasm createdisk mydisk3 /dev/sde1
Marking disk "/dev/sde1" as an ASM disk: [  OK  ]
[root@rac1 ~]# /etc/init.d/oracleasm listdisks;
MYDISK1
MYDISK2
MYDISK3
[root@rac1 ~]# /etc/init.d/oracleasm scandisks;
Scanning system for ASM disks: [  OK  ]
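On the other node, a rescan should make the same ASM disks visible (a quick check, assuming the shared disks are presented identically to rac2):
[root@rac2 ~]# /etc/init.d/oracleasm scandisks
[root@rac2 ~]# /etc/init.d/oracleasm listdisks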

Configuring the Oracle Cluster File System (OCFS2)
---------------------------------------------
OCFS2 is a general-purpose cluster file system developed by Oracle and integrated with the Linux kernel. It allows all nodes to share files on the cluster file system concurrently, which removes the need to manage raw devices directly. We will host the OCR and the voting disk on OCFS2.
OCR - Oracle Cluster Registry: records configuration information about each node.
Voting Disk - establishes quorum: the arbitration mechanism that decides which nodes may write to the shared storage at the same time, so that split-brain conflicts are avoided.

In the GUI, run ocfs2console as root to generate the OCFS2 configuration file.
During "Configure Nodes" the error "Could not start cluster stack, this must be resolved before any OCFS2..." appeared.
The causes turned out to be:
1. SELinux had not been disabled: edit /etc/selinux/config, set SELINUX=disabled, and reboot.
2. The wrong OCFS2 RPM had been installed: ocfs2-2.6.9-42.EL-1.2.3-1.i686.rpm does not match the running kernel:
[root@rac1 ~]# uname -r
2.6.9-67.EL
After downloading and installing the OCFS2 packages that match the current kernel, the problem was resolved:
ocfs2-2.6.9-67.EL-1.2.7-1.el4.i686.rpm
ocfs2console-1.2.7-1.el4.i386.rpm
ocfs2-tools-1.2.7-1.el4.i386.rpm

The configuration file is generated at:
/etc/ocfs2/cluster.conf
The "Propagate Configuration" function can then be used to push the configuration from rac1 to rac2.
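For reference, the generated /etc/ocfs2/cluster.conf for this two-node setup should look roughly like the sketch below (the exact file is written by ocfs2console; the addresses are the public IPs from /etc/hosts above and 7777 is the default ip_port):
node:
        ip_port = 7777
        ip_address = 192.168.0.77
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.78
        number = 1
        name = rac2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2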

Configure the O2CB driver on both nodes
--------------------------------------------
O2CB is the set of cluster services that manage communication between the nodes and the cluster file system:
NM: the node manager, which tracks all nodes listed in cluster.conf
HB: the heartbeat service, which issues up/down notifications when nodes join or leave the cluster
TCP: handles communication between the nodes
DLM: the distributed lock manager, which tracks all locks and the state of their owners
CONFIGFS: the user-space-driven configuration file system mounted at /config
DLMFS: the user-space interface to the kernel-space DLM

Run the following on both nodes, rac1 and rac2:
[root@rac1 ~]# /etc/init.d/o2cb unload
Stopping O2CB cluster ocfs2: OK
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK

[root@rac1 ~]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting <ENTER>
without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK

Once both nodes are configured, and before formatting and mounting the file system, verify that O2CB is online on both nodes.
[root@rac1 ~]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
  Heartbeat dead threshold: 61
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
Checking O2CB heartbeat: Not active

[root@rac2 ~]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
  Heartbeat dead threshold: 61
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
Checking O2CB heartbeat: Not active
There is no heartbeat yet, because the file system has not been mounted.

Format the file system on one node, then mount it on both nodes at /ocfs
----------------------------------------------------------------------------------------------------
Format the volume with ocfs2console:
Tasks -> Format
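The same step can also be done from the command line; a sketch of the equivalent mkfs.ocfs2 call (the label, block size, cluster size and node-slot count here are illustrative choices, not values taken from the GUI session):
[root@rac1 ~]# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oracrsfiles /dev/sdb1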

Then mount the file system on rac1 and rac2:
[root@rac1 /]# ls -ld ocfs
drwxr-xr-x  2 root root 4096 Jan  3 15:42 ocfs
[root@rac1 /]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs
Check the O2CB heartbeat again:
[root@rac1 /]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
  Heartbeat dead threshold: 61
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
Checking O2CB heartbeat: Active
The heartbeat is now active.

By editing /etc/fstab on both nodes, the file system can also be mounted at boot time:
/dev/sdb1       /ocfs       ocfs2       _netdev,datavolume,nointr       0 0
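For the boot-time mount to actually happen, the o2cb and ocfs2 init scripts should also be enabled on both nodes (a sketch, assuming the stock init scripts shipped with ocfs2-tools):
[root@rac1 ~]# chkconfig o2cb on
[root@rac1 ~]# chkconfig ocfs2 on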

Create the directory on OCFS2 that will hold the OCR and the voting disk
-------------------------------------------------------------------------
The OCR and the voting disk will live under /ocfs/clusterware:
[root@rac1 /]# mkdir /ocfs/clusterware
[root@rac1 /]# chown -R oracle:dba /ocfs


Installing the Oracle Clusterware
----------------------------
Installing on one node automatically pushes the software to the other node.
to be continued......

 

Source: http://blog.itpub.net/12361284/viewspace-87899/
