How to simulate a two-node RAC setup on VMware


Environment
Operating system: Red Hat EL 3, VMware GSX 2.5, Oracle 9.2.0.4
Tip: installing on RH 2.1 instead tends to go more smoothly, and disk I/O performance is better.


There are already several ways to install RAC on a single physical machine,
such as starting two instances on the same node, but none of them faithfully simulates a real
RAC environment: with only one physical host, you cannot reproduce the effect of two nodes simultaneously mounting one disk array.

Even companies with money (telecoms, mobile operators, and yes, we have some too) that own a disk array (SAN or NAS) are unlikely to hand you a spare array
just to tinker with, and you should never experiment on a production system;
I for one would not dare play around in our SAN environment, and if you are bolder than that, it is outside the scope of this discussion.

So this article presents a cheap RAC solution,
giving more budget-constrained readers a chance to get hands-on with a RAC environment and to try out different RAC configurations.

Since the intended readers already have some RAC and Linux background, some steps are abbreviated.
These notes were also written up after the fact, so a few steps may not be perfectly clear, but all the key steps are explained.
Questions and discussion are welcome.


This article also covers installing and configuring OCFS. RAC + Linux + OCFS is the standard runtime environment recommended by Oracle,
and raw devices and disk partitions on Linux carry many restrictions (no more than 255 raw devices,
and no more than 15 partitions per disk), so for large deployments OCFS is well worth enabling.

1. Install two virtual machines: VMware GSX 2.5 + Red Hat EL 3.0

2. Add a plain disk (on the second node this is done by running `vmware -G` under Linux and selecting the corresponding SCSI device).
Note: it is best to give the plain disk the same SCSI ID on both nodes, e.g. scsi 1:0.


This step is the key to a true RAC setup: it is what lets the two virtual machines share one disk.


You need to edit the Linux.pln file and make the path to the .dat file absolute; otherwise the second node cannot find it.


Edit the .vmx file of every VM that will take part in the RAC
and append the following two lines:
scsi1.sharedBus = "virtual"
disk.locking = "false"

The files then look like this:
node1

scsi1.present = TRUE
uuid.bios = "56 4d d9 6f ac 6b 6a a7-09 fc 94 d1 19 4b 67 c3"

scsi1:0.present = "true"
scsi1:0.deviceType = "plainDisk"
scsi1:0.fileName = "Linux.pln"
scsi1.sharedBus = "virtual"
disk.locking = "false"

node2

scsi1:0.present = TRUE
scsi1:0.deviceType = "plainDisk"
scsi1:0.fileName = "/home/vmware/Linux/Linux.pln"
scsi1.sharedBus = "virtual"
disk.locking = "false"


3. Partition the plain disk (on the primary node, partition the plain disk just shared)

You must first run
mklabel bsd
or you will get: Error: Unable to open /dev/sdb - unrecognised disk label.

Create the filesystem on the same machine where you did the partitioning.

mkpart primary 0 8000

mkfs.ext3 /dev/sdc1

mount /dev/sdc1 /tpdata/rac
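Collected together, the partitioning step looks like the sketch below. It is a dry run that only prints each command, since it has to be pointed at a real shared disk; note that the text opens the disk as /dev/sdb for labeling but builds the filesystem on /dev/sdc1, so substitute whatever device name your node actually assigns.

```shell
# Dry-run plan of step 3; swap run() for: run() { "$@"; } to execute.
# Device names are taken from the text and are assumptions for your VM.
run() { echo "+ $*"; }
run parted /dev/sdb mklabel bsd           # avoids "unrecognised disk label"
run parted /dev/sdb mkpart primary 0 8000 # one 8000 MB primary partition
run mkfs.ext3 /dev/sdc1                   # make the fs where you partitioned
run mount /dev/sdc1 /tpdata/rac
```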


4. Edit /etc/hosts and /etc/sysconfig/network so that hostname returns the fully qualified domain name (both nodes).
For example:
cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=vmware2.tplife.com
GATEWAY=10.1.1.254

[root@vmware1 diskarray]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
10.1.1.70 vmware1.tplife.com vmware1
10.1.1.71 vmware2.tplife.com vmware2
10.1.1.72 vmware3.tplife.com vmware3
10.1.1.73 vmware4.tplife.com vmware4
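To confirm the change worked, `hostname` must return a name that includes the domain. The helper below is a small sketch that classifies a name as fully qualified or short; on a correctly configured node, feeding it `$(hostname)` should print fqdn (the tplife.com names are the ones used in this guide):

```shell
# Classify a hostname: "fqdn" if it contains a dot, otherwise "short".
is_fqdn() {
  case "$1" in
    *.*) echo fqdn ;;
    *)   echo short ;;
  esac
}
is_fqdn vmware2.tplife.com   # -> fqdn
is_fqdn vmware2              # -> short; fix /etc/sysconfig/network
```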


5. Configure the kernel parameters on both nodes, in
/etc/sysctl.conf

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=262144
net.core.rmem_max=262144
net.core.wmem_default=262144
net.core.wmem_max=262144


/etc/init.d/network restart
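The values can also be applied immediately with `sysctl -p`. As a sanity check on the numbers themselves: kernel.shmmax above is exactly 2 GiB, the maximum size of a single shared-memory segment and therefore the ceiling for the SGA.

```shell
# kernel.shmmax = 2147483648 bytes; confirm this is exactly 2 GiB.
shmmax=2147483648
if [ "$shmmax" -eq $((2 * 1024 * 1024 * 1024)) ]; then
  echo "shmmax = 2 GiB"
fi
```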

6. Install the rsh-server package (both nodes)

rpm -ivh rsh-server-0.17-17.i386.rpm
rpm -ivh rsh-0.17-17.i386.rpm


Edit /etc/xinetd.d/rlogin
and /etc/xinetd.d/rsh
to enable both services.

Restart xinetd:

/etc/init.d/xinetd restart

vi /etc/hosts.equiv
and add:

vmware1
vmware2



7. Create the group and user (both nodes)

groupadd dba -g 501

useradd -G dba -u 500 rac


8. Verify that rcp works between the nodes.
None of the steps below should prompt for a password.

On vmware1:
touch rcp.test
rcp rcp.test vmware2:/tpsys
The file should now appear under /tpsys on vmware2.
On vmware2:
touch rcp2.test
rcp rcp2.test vmware1:/tpsys
The file should now appear under /tpsys on vmware1.
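The round trip generalizes to any number of nodes. A dry-run sketch (it only prints the commands; node names and the /tpsys path are the ones used in this guide):

```shell
# Dry-run of the rcp check; swap run() for: run() { "$@"; } to execute.
run() { echo "+ $*"; }
for node in vmware1 vmware2; do
  run touch rcp.test
  run rcp rcp.test "$node:/tpsys"   # must not prompt for a password
done
```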


9. Unpack the cpio archives (on the node that will run the installer)

cpio -idvm < <cpio file>   (repeat for every cpio file)



10. Set the user's environment variables (both nodes)
PATH=$PATH:$HOME/bin

export PATH
unset USERNAME
export ORACLE_BASE=/tpsys/9.2.0/rac
export ORACLE_HOME=$ORACLE_BASE/app
export ORACLE_SID=rac
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data;
export NLS_LANG=american_america.AL32UTF8
export DISPLAY=10.2.33.41:0


11. Partitioning


Create the following partitions (on the primary node, vmware1):

SYSTEM     400 MB     /dev/sdd14
TEMP       100 MB     /dev/sdd13
UNDOTBS    100 MB x2  /dev/sdd11 /dev/sdd12
CONTROL01  50 MB      /dev/sdd10
CONTROL02  50 MB      /dev/sdd9
ONLINELOG  50 MB x4   /dev/sdd5 - /dev/sdd8
SPFILE     5 MB       /dev/sdd3
quorum     20 MB      /dev/sdd1
srvcfg     100 MB     /dev/sdd2



(both nodes) Bind the raw devices
raw /dev/raw/raw1 /dev/sdd1
raw /dev/raw/raw2 /dev/sdd2
raw /dev/raw/raw3 /dev/sdd3
raw /dev/raw/raw4 /dev/sdd4
raw /dev/raw/raw5 /dev/sdd5
raw /dev/raw/raw6 /dev/sdd6
raw /dev/raw/raw7 /dev/sdd7
raw /dev/raw/raw8 /dev/sdd8
raw /dev/raw/raw9 /dev/sdd9
raw /dev/raw/raw10 /dev/sdd10
raw /dev/raw/raw11 /dev/sdd11
raw /dev/raw/raw12 /dev/sdd12
raw /dev/raw/raw13 /dev/sdd13
raw /dev/raw/raw14 /dev/sdd14

(both nodes) Change the owner of the raw devices
chown rac /dev/raw/raw1
chown rac /dev/raw/raw2
chown rac /dev/raw/raw3
chown rac /dev/raw/raw4
chown rac /dev/raw/raw5
chown rac /dev/raw/raw6
chown rac /dev/raw/raw7
chown rac /dev/raw/raw8
chown rac /dev/raw/raw9
chown rac /dev/raw/raw10
chown rac /dev/raw/raw11
chown rac /dev/raw/raw12
chown rac /dev/raw/raw13
chown rac /dev/raw/raw14
(both nodes) Change the permissions of the raw devices

chmod 600 /dev/raw/raw1
chmod 600 /dev/raw/raw2
chmod 600 /dev/raw/raw3
chmod 600 /dev/raw/raw4
chmod 600 /dev/raw/raw5
chmod 600 /dev/raw/raw6
chmod 600 /dev/raw/raw7
chmod 600 /dev/raw/raw8
chmod 600 /dev/raw/raw9
chmod 600 /dev/raw/raw10
chmod 600 /dev/raw/raw11
chmod 600 /dev/raw/raw12
chmod 600 /dev/raw/raw13
chmod 600 /dev/raw/raw14
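The three lists of fourteen commands above follow one pattern, so they can be collapsed into a single loop, again shown as a dry run that prints each command (swap run() to execute for real):

```shell
# Bind raw1..raw14 to sdd1..sdd14, then set owner and permissions.
run() { echo "+ $*"; }            # replace with: run() { "$@"; } to apply
for i in $(seq 1 14); do
  run raw /dev/raw/raw$i /dev/sdd$i
  run chown rac /dev/raw/raw$i
  run chmod 600 /dev/raw/raw$i
done
```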



Edit /etc/sysconfig/rawdevices (both nodes, so that the raw devices are bound again automatically at boot)
and append:

/dev/raw/raw1 /dev/sdd1
/dev/raw/raw2 /dev/sdd2
/dev/raw/raw3 /dev/sdd3
/dev/raw/raw4 /dev/sdd4
/dev/raw/raw5 /dev/sdd5
/dev/raw/raw6 /dev/sdd6
/dev/raw/raw7 /dev/sdd7
/dev/raw/raw8 /dev/sdd8
/dev/raw/raw9 /dev/sdd9
/dev/raw/raw10 /dev/sdd10
/dev/raw/raw11 /dev/sdd11
/dev/raw/raw12 /dev/sdd12
/dev/raw/raw13 /dev/sdd13
/dev/raw/raw14 /dev/sdd14
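The fourteen entries can likewise be generated instead of typed; redirecting the output with `>> /etc/sysconfig/rawdevices` (as root) appends them:

```shell
# Print the /etc/sysconfig/rawdevices lines for raw1..raw14.
gen_rawdevices() {
  for i in $(seq 1 14); do
    printf '/dev/raw/raw%d /dev/sdd%d\n' "$i" "$i"
  done
}
gen_rawdevices
```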

12. Begin the installation (including the required system packages and patches)




rpm -ivh compat-gcc-7.3-2.96.122.i386.rpm (disc 3)
rpm -ivh compat-libstdc++-7.3-2.96.122.i386.rpm (disc 3)
rpm -ivh compat-libstdc++-devel-7.3-2.96.122.i386.rpm (disc 3)

rpm -ivh compat-gcc-c++-7.3-2.96.122.i386.rpm (disc 3)
rpm -ivh compat-db-4.0.14-5.i386.rpm (disc 3)

rpm -ivh openmotif21-2.1.30-8.i386.rpm (disc 3)

mv /usr/bin/gcc /usr/bin/gcc323
mv /usr/bin/g++ /usr/bin/g++323
ln -s /usr/bin/gcc296 /usr/bin/gcc
ln -s /usr/bin/g++296 /usr/bin/g++

Unzip p3006854_9204_LINUX.zip
cd 3006854

su - root
sh rhel3_pre_install.sh

su - rac

export LD_ASSUME_KERNEL=2.4.19



The installation itself is not covered in detail here; follow the standard documentation.

When installing Cluster Manager, select both nodes and the installer will automatically install onto the second node as well.




13. Configure the cluster




The following steps are executed on both nodes.


Load the hangcheck-timer module (shipped with RH 3):
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
/sbin/lsmod



Edit $ORACLE_HOME/oracm/admin/cmcfg.ora

as follows:
HeartBeat=15000
ClusterName=Oracle Cluster Manager, version 9i
PollInterval=1000
MissCount=210
PrivateNodeNames=vmware1 vmware2
PublicNodeNames=vmware1 vmware2
ServicePort=9998
#WatchdogSafetyMargin=5000
#WatchdogTimerMargin=60000
CmDiskFile=/dev/raw/raw1
HostName=vmware1
KernelModuleName=hangcheck-timer
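These values are tied to the hangcheck-timer parameters loaded earlier. A commonly cited guideline for ORACM on Linux (treated here as an assumption, not a quote from this guide) is that MissCount x PollInterval should be at least hangcheck_tick + hangcheck_margin seconds; with PollInterval = 1000 ms, MissCount = 210 corresponds to 210 s, which exactly matches 30 + 180:

```shell
# Consistency check between cmcfg.ora and the hangcheck-timer module.
tick=30; margin=180         # insmod hangcheck-timer parameters (seconds)
poll_ms=1000; misscount=210 # cmcfg.ora PollInterval (ms) and MissCount
[ $((misscount * poll_ms / 1000)) -ge $((tick + margin)) ] \
  && echo "MissCount covers the hangcheck window"
```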

Modify the $ORACLE_HOME/oracm/bin/ocmstart.sh and comment out the following lines:

# watchdogd's default log file
# WATCHDOGD_LOG_FILE=$ORACLE_HOME/oracm/log/wdd.log

# watchdogd's default backup file
# WATCHDOGD_BAK_FILE=$ORACLE_HOME/oracm/log/wdd.log.bak

# Get arguments
# watchdogd_args=`grep '^watchdogd' $OCMARGS_FILE |
# sed -e 's+^watchdogd *++'`

# Check watchdogd's existance
# if watchdogd status | grep 'Watchdog daemon active' >/dev/null
# then
# echo 'ocmstart.sh: Error: watchdogd is already running'
# exit 1
# fi

# Backup the old watchdogd log
# if test -r $WATCHDOGD_LOG_FILE
# then
# mv $WATCHDOGD_LOG_FILE $WATCHDOGD_BAK_FILE
# fi

# Startup watchdogd
# echo watchdogd $watchdogd_args
# watchdogd $watchdogd_args

Comment out the watchdogd line in the $ORACLE_HOME/oracm/admin/ocmargs.ora so that your file looks like this:

# Sample configuration file $ORACLE_HOME/oracm/admin/ocmargs.ora
# watchdogd
oracm
norestart 1800

Make sure all of these changes have been made on all RAC nodes.


su root
On node2:
mkdir $ORACLE_HOME/oracm/log
touch $ORACLE_HOME/oracm/log/cm.out

$ORACLE_HOME/oracm/bin/ocmstart.sh

ps -ef |grep oracm

Make sure this process is running on every node.





14. Install database 9.2.0.1, then patch it to 9.2.0.4

Start oracm on the node that runs the installer (vmware1).







On the second node, run:

mkdir -p $ORACLE_HOME/rdbms/audit
mkdir -p $ORACLE_HOME/rdbms/log
mkdir -p $ORACLE_HOME/network/log
mkdir -p $ORACLE_HOME/Apache/Apache/logs
mkdir -p $ORACLE_HOME/Apache/Jserv/logs




Apply the patch set to bring the software to 9.2.0.4.

OUI must first be upgraded to 2218; install it according to the standard documentation.


cd $ORACLE_BASE/oui/bin/linux
ln -s libclntsh.so.9.0 libclntsh.so


After applying the 9.2.0.4 patch set, install patch 3119415 (required for installing Oracle 9.2 on RH 3).

For the detailed steps, refer to the relevant Oracle documentation.





15. Create the database (on the first node, vmware1)
Create the init file as follows:


*.background_dump_dest='/tpdata/rac/9.2.0/admin/rac/bdump'
*.cluster_database=true
*.cluster_database_instances=4
*.compatible='9.2.0'
*.control_files='/dev/raw/raw9'
*.core_dump_dest='/tpdata/rac/9.2.0/admin/rac/cdump'
*.db_block_size=8192
*.db_cache_size=52428800
*.db_name='rac'
*.fast_start_mttr_target=300
*.java_pool_size=10485760
*.job_queue_processes=5
*.large_pool_size=1048576
*.log_archive_dest_1='LOCATION=/tpdata/rac/9.2.0/oradata/rac/archive'
*.log_archive_format='%t_%s.dbf'
*.log_archive_start=true
*.log_buffer=5242880
*.open_cursors=300
*.processes=250
*.remote_login_passwordfile='EXCLUSIVE'
*.shared_pool_size=52428800
*.sort_area_size=5242880
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.user_dump_dest='/tpdata/rac/9.2.0/admin/rac/udump'
rac01.undo_tablespace='UNDOTBS'
rac02.undo_tablespace='UNDOTBS2'
rac01.instance_name='rac01'
rac02.instance_name='rac02'
rac01.instance_number=1
rac02.instance_number=2
rac02.local_listener='LISTENER_RAC02'
rac01.local_listener='LISTENER_RAC01'
rac02.remote_listener='LISTENER_RAC01'
rac01.remote_listener='LISTENER_RAC02'
rac01.remote_login_passwordfile='exclusive'
rac02.remote_login_passwordfile='exclusive'
rac01.thread=1
rac02.thread=2



mkdir -p /tpdata/rac/9.2.0/admin/rac/bdump
mkdir -p /tpdata/rac/9.2.0/admin/rac/cdump
mkdir -p /tpdata/rac/9.2.0/admin/rac/udump
mkdir -p /tpdata/rac/9.2.0/oradata/rac/archive



orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=ftp123



vi $ORACLE_HOME/network/admin/tnsnames.ora
as follows:
rac01 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.1.70)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = rac)
(instance_name=rac01)
)
)
rac02=
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.1.71)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = rac)
(instance_name=rac02)
)
)
rac =
(DESCRIPTION =
(ADDRESS_LIST =
(load_balance=on)
(failover=on)
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.1.70)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.1.71)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = rac)
)
)
listeners_rac=
(address=(protocol=tcp)(host=10.1.1.70)(port=1521))
(address=(protocol=tcp)(host=10.1.1.71)(port=1521))
LISTENER_RAC01 =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.1.70)(PORT = 1521))
LISTENER_RAC02 =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.1.71)(PORT = 1521))

vi $ORACLE_HOME/network/admin/listener.ora
as follows:

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.1.70)(PORT = 1521))
)
)
)



sqlplus /nolog
conn / as sysdba;
startup nomount;

create database rac
maxinstances 10
maxlogfiles 20
maxlogmembers 3
maxdatafiles 100
noarchivelog
character set AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
datafile '/dev/raw/raw14' size 350m
undo tablespace UNDOTBS datafile '/dev/raw/raw11' size 95m
default temporary tablespace temp tempfile '/dev/raw/raw13' size 95m
logfile
'/dev/raw/raw5' size 10m,
'/dev/raw/raw6' size 10m
/

spool cat.log
@?/rdbms/admin/catalog.sql;
@?/rdbms/admin/catproc.sql;
@?/rdbms/admin/catclust.sql; --Create all cluster database specific views
@?/rdbms/admin/catblock.sql; --create views of oracle locks
@?/rdbms/admin/catexp7.sql;
@?/rdbms/admin/catoctk.sql; --Contains scripts needed to use the PL/SQL Cryptographic Toolkit Interface

alter user system identified by system; --sqlplus help
conn system/system;
@?/sqlplus/admin/pupbld.sql
@?/sqlplus/admin/help/hlpbld.sql helpus.sql;
spool off

create undo tablespace UNDOTBS2 datafile '/dev/raw/raw12' size 95m;
alter database add logfile thread 2 '/dev/raw/raw7' size 10m;
alter database add logfile thread 2 '/dev/raw/raw8' size 10m;

alter database enable thread 2;











16. Configure the second node (vmware2)


Copy the software installed on the first node over to the second node.

Edit the listener.ora file.
Create the password file:

orapwd file=$ORACLE_HOME/dbs/orapwrac02 password=ftp123
Change the ORACLE_SID variable to rac02.

cp $ORACLE_HOME/dbs/initrac01.ora $ORACLE_HOME/dbs/initrac02.ora
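Strung together, the node-2 preparation looks like the dry-run sketch below (the ORACLE_HOME default is the value exported in step 10 and is an assumption if your layout differs):

```shell
# Dry-run of the node-2 preparation; swap run() for: run() { "$@"; }.
run() { echo "+ $*"; }
ORACLE_HOME="${ORACLE_HOME:-/tpsys/9.2.0/rac/app}"
export ORACLE_SID=rac02                       # this node's instance
run orapwd file="$ORACLE_HOME/dbs/orapwrac02" password=ftp123
run cp "$ORACLE_HOME/dbs/initrac01.ora" "$ORACLE_HOME/dbs/initrac02.ora"
```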

Edit $ORACLE_HOME/oracm/admin/cmcfg.ora with the correct node information,
as follows:
HeartBeat=15000
ClusterName=Oracle Cluster Manager, version 9i
PollInterval=1000
MissCount=210
PrivateNodeNames=vmware1 vmware2
PublicNodeNames=vmware1 vmware2
ServicePort=9998
#WatchdogSafetyMargin=5000
#WatchdogTimerMargin=60000
CmDiskFile=/dev/raw/raw1
HostName=vmware2
KernelModuleName=hangcheck-timer

Start the database:


startup





The installation is now complete; you can run some TAF tests of your own.
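For those TAF tests, note that the tnsnames.ora above only enables connect-time failover (failover=on); for in-flight session failover you would also add a FAILOVER_MODE clause to the rac alias. A hedged sketch (the clause is standard Oracle Net syntax; the values are illustrative):

```sql
-- Add inside (CONNECT_DATA = ...) of the "rac" alias for TAF:
--   (failover_mode = (type = select)(method = basic))
-- Then, connected through the "rac" alias, see which instance serves you:
SELECT instance_name FROM v$instance;
-- Abort that instance from its node, re-run the query, and the session
-- should resume on the surviving instance.
```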




Installing OCFS alongside RAC

1. Get the rpm packages
They can be downloaded from the Oracle website.

2. Install the rpms (all nodes)
[root@vmware1 root]# rpm -ivh ocfs*
Preparing... ########################################### [100%]
1:ocfs-support ########################################### [ 33%]
2:ocfs-2.4.21-EL ########################################### [ 67%]
Linking OCFS module into the module path [ OK ]
3:ocfs-tools ########################################### [100%]

Check that the service is configured to start automatically:
[root@vmware1 root]# chkconfig --list |grep ocfs
ocfs 0:off 1:off 2:on 3:on 4:on 5:on 6:off

3. Configure (all nodes)
vi /etc/ocfs.conf
# Ensure this file exists in /etc directory #
node_name = vmware1 (the node's name)
ip_address = 10.1.1.70 (the node's IP)
ip_port = 7000
comm_voting = 1

Generate the guid:
ocfs_uid_gen -c
/etc/ocfs.conf is then rewritten as:
# Ensure this file exists in /etc directory #
node_name =vmware1
ip_address = 10.1.1.70
ip_port = 7000
comm_voting = 1
guid = 01EE4B2C21C61836A214005056400067

4. Format the partition (one node is enough; vmware1 does it here)


Create a partition (>= 200 MB) with parted,
then format it with mkfs.ocfs:

mkfs.ocfs -b 128 -C -F -g dba -u rac -L tplifeocfs -m /tpdata/rac/9.2.0/oradata/rac -p 755 /dev/sdd15

According to Oracle, a 128 KB block size is a good fit.


5. Mount the partition (all nodes)
Note: vmware2 must be rebooted before it can correctly see the partition created on vmware1 in the previous step.

/etc/init.d/ocfs start

mount -t ocfs /dev/sdd15 /tpdata/rac/9.2.0/oradata/rac



Edit /etc/fstab to mount the volume automatically:
(Note: _netdev instructs mount to exclude these volumes on first pass mount i.e. only mount after all network services are started.)
/dev/sdd15 /tpdata/rac/9.2.0/oradata/rac ocfs _netdev 0 0



After that, you can create Oracle data files on it just as on a normal filesystem.


6. Caveats
If you replace the network card, you must re-run ocfs_uid_gen -c.




How to switch to archivelog mode in a RAC environment
1. Stop all nodes.

2. In the init file, set *.cluster_database=false.
3. Make the change on one node:

startup mount;
alter database archivelog ;
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /tpdata/rac/9.2.0/oradata/rac/archive
Oldest online log sequence 31
Next log sequence to archive 32
Current log sequence 32


alter database open;
shutdown immediate;

4. Restore
*.cluster_database=true
5. Start all nodes.






Source: ITPUB blog, http://blog.itpub.net/3618/viewspace-485519/. When reproducing, please credit the source.
