Step by Step: Installing a Two-Node Oracle 10g R2 RAC on RHEL 5.5 x86_64 in VirtualBox 4.1.6

1. Set up the single-instance environment
See: http://blog.csdn.net/t0nsha/article/details/7166582

2. Configure name resolution
vim /etc/hosts
127.0.0.1       localhost.localdomain localhost
192.168.2.101   rac1.localdomain        rac1
192.168.2.102   rac2.localdomain        rac2
192.168.0.101   rac1-priv.localdomain   rac1-priv
192.168.0.102   rac2-priv.localdomain   rac2-priv
192.168.2.111   rac1-vip.localdomain    rac1-vip
192.168.2.112   rac2-vip.localdomain    rac2-vip
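A quick check that all six names resolve (the VIP addresses 192.168.2.111/112 should resolve here but must not be up or answer ping yet; vipca brings them online during the Clusterware install):
getent hosts rac1 rac1-priv rac1-vip rac2 rac2-priv rac2-vip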

3. Create the installation directories
mkdir -p /u01/oracle/crs
mkdir -p /u01/oracle/10gR2
chown -R oracle:oinstall /u01
chmod -R 775 /u01

4. Configure environment variables on rac1
vi .bash_profile
export ORACLE_SID=RAC1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/crs
export PATH=$ORACLE_HOME/bin:$PATH
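Re-source the profile so the variables take effect in the current shell:
source ~/.bash_profile
echo $ORACLE_HOME    # should print /u01/oracle/crs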

5. Create the shared disks
set path=C:\Program Files\Oracle\VirtualBox;%path%
VBoxManage createhd --filename ocr1.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename ocr2.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename vot1.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename vot2.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename vot3.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename asm1.vdi --size 5120 --format VDI --variant Fixed
VBoxManage createhd --filename asm2.vdi --size 5120 --format VDI --variant Fixed
VBoxManage createhd --filename asm3.vdi --size 5120 --format VDI --variant Fixed
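These commands run in a Windows command prompt on the host (hence the set path line above). One way to confirm that all eight disks were created and registered:
VBoxManage list hdds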

6. Attach the shared disks
(The controller name "SATA 控制器" must match what the VM's storage settings show; it is the default name on a Chinese-locale VirtualBox and would be "SATA Controller" on an English-locale install.)
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 1 --device 0 --type hdd --medium ocr1.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 2 --device 0 --type hdd --medium ocr2.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 3 --device 0 --type hdd --medium vot1.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 4 --device 0 --type hdd --medium vot2.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 5 --device 0 --type hdd --medium vot3.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 6 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 7 --device 0 --type hdd --medium asm2.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 8 --device 0 --type hdd --medium asm3.vdi --mtype shareable
http://www.oracledistilled.com/virtualbox/creating-shared-drives-in-oracle-vm-virtualbox/

7. Mark the disks shareable
VBoxManage modifyhd ocr1.vdi --type shareable
VBoxManage modifyhd ocr2.vdi --type shareable
VBoxManage modifyhd vot1.vdi --type shareable
VBoxManage modifyhd vot2.vdi --type shareable
VBoxManage modifyhd vot3.vdi --type shareable
VBoxManage modifyhd asm1.vdi --type shareable
VBoxManage modifyhd asm2.vdi --type shareable
VBoxManage modifyhd asm3.vdi --type shareable
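To spot-check that a disk is now shareable (findstr being the Windows-host stand-in for grep):
VBoxManage showhdinfo ocr1.vdi | findstr /i "Type"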

8. Clone the second virtual machine
mkdir rac2
VBoxManage clonehd rac1\rac1.vdi rac2\rac2.vdi

Create a new VM (rac2) based on rac2.vdi, then attach the shared disks to it:

VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 1 --device 0 --type hdd --medium ocr1.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 2 --device 0 --type hdd --medium ocr2.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 3 --device 0 --type hdd --medium vot1.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 4 --device 0 --type hdd --medium vot2.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 5 --device 0 --type hdd --medium vot3.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 6 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 7 --device 0 --type hdd --medium asm2.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 8 --device 0 --type hdd --medium asm3.vdi --mtype shareable

9. Configure environment variables on rac2
vi .bash_profile
export ORACLE_SID=RAC2
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/crs
export PATH=$ORACLE_HOME/bin:$PATH

10. Test name resolution
ping -c 3 rac1
ping -c 3 rac1-priv
ping -c 3 rac2
ping -c 3 rac2-priv

11. Configure SSH user equivalence
As the oracle user on rac1:
ssh-keygen -t rsa
cd ~/.ssh && cat id_rsa.pub >> authorized_keys
scp authorized_keys rac2:/home/oracle/.ssh/

Then on rac2, append rac2's own public key and copy the combined file back:
ssh-keygen -t rsa
cd ~/.ssh && cat id_rsa.pub >> authorized_keys
scp authorized_keys rac1:/home/oracle/.ssh/

Run the following four commands on both rac1 and rac2; SSH equivalence is working when none of them prompts for a password:
ssh rac1 date
ssh rac1-priv date
ssh rac2 date
ssh rac2-priv date
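The same four checks as a one-liner, to be run from each node:
for h in rac1 rac1-priv rac2 rac2-priv; do ssh $h date; done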

12. Bind the shared disks to raw devices
vim /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdh", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="sdi", RUN+="/bin/raw /dev/raw/raw8 %N"
ACTION=="add", KERNEL=="raw[1-8]", OWNER="oracle", GROUP="oinstall", MODE="0660"

13. Run the Cluster Verification Utility
cd /clusterware/cluvfy/
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

The following error is caused by a known bug and can be ignored:
Could not find a suitable set of interfaces for VIPs.
(http://www.eygle.com/archives/2007/12/oracle10g_rac_linux_cluvfy.html)

14. Install Oracle Clusterware
cd /clusterware/
./runInstaller

If runInstaller stops at the prompt below, run /clusterware/rootpre/rootpre.sh as root on both nodes:
Has 'rootpre.sh' been run by root? [y/n] (n)
# cd /clusterware/rootpre
# ./rootpre.sh

If the installer rejects the OS version as unsupported, edit the following file:
vim /etc/redhat-release
redhat-4
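A safer way to make the same edit is to keep a backup and restore it once the install finishes, e.g.:
cp /etc/redhat-release /etc/redhat-release.orig
echo "redhat-4" > /etc/redhat-release
# after the install: cp /etc/redhat-release.orig /etc/redhat-release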

Running /u01/oracle/crs/root.sh failed with:
Failed to upgrade Oracle Cluster Registry configuration
Two things needed fixing:
1. Apply patch p4679769_10201_Linux-x86-64.zip (it simply replaces clsfmt.bin on each node).
2. The disks used for OCR and voting had no partitions. Partition sdb through sdi with fdisk (since the disks are shared, partitioning them on rac1 is enough; the partitions then show up on rac2), then zero out the OCR and voting raw devices with dd. A sample fdisk session appears after the forum quote below:
dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw3 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw4 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw5 bs=1M count=256
* a.) Are the RAW devices you are using partitions or full disks? They have to be partitions.
(https://cn.forums.oracle.com/forums/thread.jspa?threadID=1122862&start=0&tstart=0)
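A sketch of a non-interactive fdisk session for the first disk (repeat for sdc through sdi; the two blank reply lines accept the default first and last cylinders, creating one primary partition spanning the disk):
fdisk /dev/sdb <<EOF
n
p
1


w
EOF
Note that once the disks are partitioned, the raw bindings logically belong on the partitions (sdb1, sdc1, ...) rather than the whole disks, so the KERNEL== patterns in the step 12 udev rules may need adjusting to match.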

Run root.sh again:
[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

Running root.sh on rac2 then hit the following error:
[root@rac2 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/oracle/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
Two scripts need to be modified (the fix comes from the Oracle 10.2.0.1 release notes linked below):

For the VIPCA utility, alter the $CRS_HOME/bin/vipca script on all nodes to remove LD_ASSUME_KERNEL: after the "if" statement at line 123, add an unset command to ensure LD_ASSUME_KERNEL is not set, as follows:

if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
       then
            LD_ASSUME_KERNEL=2.4.19
            export LD_ASSUME_KERNEL
       fi
            unset LD_ASSUME_KERNEL
With the newly inserted line, root.sh should be able to call VIPCA successfully.

For the SRVCTL utility, alter the $CRS_HOME/bin/srvctl script on all nodes by adding the line unset LD_ASSUME_KERNEL after line 174, as follows:

LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    # <- newly added line

http://docs.oracle.com/cd/B19306_01/relnotes.102/b15666/toc.htm
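To locate the lines to edit in each script (using this guide's CRS home):
grep -n LD_ASSUME_KERNEL /u01/oracle/crs/bin/vipca /u01/oracle/crs/bin/srvctl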

The only option was to rerun /u01/oracle/crs/root.sh from scratch.
First remove the cssfatal file:
rm -f /etc/oracle/scls_scr/rac1/oracle/cssfatal
Then zero out the OCR and voting disks again:
dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw3 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw4 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw5 bs=1M count=256
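The scls_scr path is per-node, so rac2 presumably has its own copy to remove; this is an assumption based on the rac1 path above:
ssh rac2 rm -f /etc/oracle/scls_scr/rac2/oracle/cssfatal   # path assumed to mirror rac1's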

On rerunning /u01/oracle/crs/root.sh, rac2 failed yet again:
[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac1 crs]# pwd
/u01/oracle/crs
[root@rac1 crs]# ssh rac2
root@rac2's password:
Last login: Wed Jan  4 21:34:06 2012 from rac1.localdomain
[root@rac2 ~]# source /home/oracle/.bash_profile
[root@rac2 ~]# cd $ORACLE_HOME
[root@rac2 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

Running vipca directly gave the same error:
[root@rac2 bin]# ./vipca
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

Workaround: define the public and cluster_interconnect interfaces with oifcfg, then rerun vipca:
[root@rac2 bin]# ./oifcfg iflist
eth0  192.168.2.0
eth1  192.168.0.0
[root@rac2 bin]# ./oifcfg setif -global eth0/192.168.2.0:public
[root@rac2 bin]# ./oifcfg setif -global eth1/192.168.0.0:cluster_interconnect
[root@rac2 bin]# ./oifcfg getif
eth0  192.168.2.0  global  public
eth1  192.168.0.0  global  cluster_interconnect
[root@rac2 bin]# ./vipca
(http://blog.chinaunix.net/space.php?uid=261392&do=blog&id=2138877)

With Clusterware installed, crs_stat -t shows the state of the cluster resources:
[root@rac2 bin]# ./crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora.rac1.gsd   application    ONLINE    ONLINE    rac1       
ora.rac1.ons   application    ONLINE    ONLINE    rac1       
ora.rac1.vip   application    ONLINE    ONLINE    rac1       
ora.rac2.gsd   application    ONLINE    ONLINE    rac2       
ora.rac2.ons   application    ONLINE    ONLINE    rac2       
ora.rac2.vip   application    ONLINE    ONLINE    rac2  

15. Install the ASM home
Back on rac1, start the ASM installation:
[oracle@rac1 database]$ ./runInstaller

Specify the ASM home path:
OraASM10g_home
/u01/oracle/10gR2/asm

After the installation completes, check the ASM status:
[oracle@rac1 database]$ srvctl status asm -n rac1
ASM instance +ASM1 is running on node rac1.
[oracle@rac1 database]$ srvctl status asm -n rac2
ASM instance +ASM2 is running on node rac2.
[oracle@rac1 database]$

[oracle@rac1 database]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    rac1       
ora....C1.lsnr application    ONLINE    ONLINE    rac1       
ora.rac1.gsd   application    ONLINE    ONLINE    rac1       
ora.rac1.ons   application    ONLINE    ONLINE    rac1       
ora.rac1.vip   application    ONLINE    ONLINE    rac1       
ora....SM2.asm application    ONLINE    ONLINE    rac2       
ora....C2.lsnr application    ONLINE    ONLINE    rac2       
ora.rac2.gsd   application    ONLINE    ONLINE    rac2       
ora.rac2.ons   application    ONLINE    ONLINE    rac2       
ora.rac2.vip   application    ONLINE    ONLINE    rac2       
[oracle@rac1 database]$

16. Install the database home
Back on rac1, start the database installation:
[oracle@rac1 database]$ ./runInstaller

Specify the database home path:
OraDb10g_home
/u01/oracle/10gR2/db_1


17. Update .bash_profile:
vi .bash_profile
export ORACLE_SID=RAC1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/10gR2/db_1
export PATH=$PATH:$ORACLE_HOME/bin


18. Environment-variable scripts for managing rac1
crs.env
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/crs
export PATH=$ORACLE_HOME/bin:$PATH

asm.env
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/10gR2/asm
export PATH=$ORACLE_HOME/bin:$PATH

db.env
export ORACLE_SID=RAC1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/10gR2/db_1
export PATH=$ORACLE_HOME/bin:$PATH
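Switching between the three homes is then just a matter of sourcing the right file (assuming the scripts are saved in oracle's home directory):
source ~/asm.env
sqlplus / as sysdba    # connects to the local +ASM1 instance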

 

REF:
Oracle Database 11g Release 2 RAC On Linux Using VirtualBox
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php