VirtualBox 4.2.12 + OEL 6.4 (32-bit) + raw + ASM + two-node RAC
Personal lab notes from experiments on my laptop; somewhat rough.
Part 1: Installing VirtualBox 4.2.12
I downloaded the VirtualBox 4.2.12 and OEL 6.4 32-bit ISO packages from edelivery.oracle.com and installed VirtualBox on my laptop's 32-bit Windows 7. VirtualBox feels lightweight: it uses little memory, takes little disk space once installed, and setup is just a series of Next clicks. This time I set up one node first; once everything is done, the second node gets added with addNode.
First create a single Oracle Linux VM, named OEL6.4v32_Oracle11g, as node 1: 1 GB RAM, 1 CPU, and a 12 GB boot disk for now. The shared disk has to be handled from the VirtualBox command line after the VM is created:
1. Use the VBoxManage tool to create a 10 GB disk for sharing
C:\"Program Files"\Oracle\VirtualBox\VBoxManage.exe createhd -filename D:\download\vmware\OEL6.4v32_Oracle11g\rac_disk01.vdi -size 10240 -format VDI -variant Fixed
2. Attach the new disk to the VM
C:\"Program Files"\Oracle\VirtualBox\VBoxManage.exe storageattach OEL6.4v32_Oracle11g --storagectl "SATA" --port 5 --device 0 --type hdd --medium D:\download\vmware\OEL6.4v32_Oracle11g\rac_disk01.vdi --mtype shareable
3. Mark the new disk as shareable
C:\"Program Files"\Oracle\VirtualBox\VBoxManage.exe modifyhd D:\download\vmware\OEL6.4v32_Oracle11g\rac_disk01.vdi --type shareable
4. When building the second node, just attach this same disk to that VM; there is no need to mark it shareable again
C:\"Program Files"\Oracle\VirtualBox\VBoxManage.exe storageattach OEL6.4v32_Oracle11g02 --storagectl "SATA" --port 5 --device 0 --type hdd --medium D:\download\vmware\OEL6.4v32_Oracle11g\rac_disk01.vdi --mtype shareable
I plan to finish configuring and installing everything on node 1 before creating the second VM, OEL6.4v32_Oracle11g02.
Part 2: Installing and configuring OEL 6.4 (32-bit)
1. Mount the OEL 6.4 ISO and boot into the OEL installer. When choosing the install group I picked Minimal, so the packages needed later have to be installed with yum. The install itself went smoothly; afterwards the ISO was ejected automatically and the system came up at the Linux text console. I changed the VM's boot order to boot from the hard disk only, then mounted the OEL ISO at /media for installing rpm packages.
2. Installing rpm packages with yum is convenient. Setting yum up goes like this:
a. Install createrepo first. It has dependencies, so install in this order:
rpm -Uvh deltarpm-*
rpm -Uvh python-deltarpm-*
rpm -Uvh createrepo-*
b. Install and configure yum
rpm -Uvh python-dateutil*
rpm -Uvh yum*
mkdir /root/Packages
Copy all the rpm files to the local disk:
cp /media/Packages/* /root/Packages -R
cd /root/Packages
Create the repo metadata:
createrepo . ## note the trailing dot
Create the repo groups:
createrepo -g /media/repodata/b721ade0e1f8de79b1bfeefe260f5fa17114e2f2470bffec06192769ea0abd5d-comps-rhel6-Server.xml . ## note the trailing dot
Create the yum repo file (the name just has to end in .repo):
vi /etc/yum.repos.d/OEL64.repo
[OEL64]
name=OEL64 $releasever - $basearch - Debug
baseurl=file:///root/Packages
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Clear the yum cache:
yum clean all
That's yum configured.
3. Install the graphical desktop
Oracle GI can be installed in silent mode, but that is too much hassle, so I'll use the GUI, which means installing GNOME:
yum groupinstall Desktop
4. Install the rpm packages required by GI and RAC
For 32-bit Linux:
yum -y install binutils
yum -y install compat-libstdc++-33
yum -y install elfutils-libelf
yum -y install elfutils-libelf-devel
yum -y install gcc
yum -y install gcc-c++
yum -y install glibc
yum -y install glibc-common
yum -y install glibc-devel
yum -y install glibc-headers
yum -y install ksh
yum -y install libaio
yum -y install libaio-devel
yum -y install libgcc
yum -y install libstdc++-4.1.2
yum -y install libstdc++-devel
yum -y install make
yum -y install sysstat
yum -y install unixODBC
yum -y install unixODBC-devel
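The twenty yum invocations above can be collapsed into one loop over the same package list (a sketch; run as root on the node):

```shell
#!/bin/bash
# Prerequisite packages for 32-bit GI/RAC, same list as the commands above.
pkgs=(binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel
      gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh
      libaio libaio-devel libgcc libstdc++-4.1.2 libstdc++-devel
      make sysstat unixODBC unixODBC-devel)
echo "installing ${#pkgs[@]} packages"
for p in "${pkgs[@]}"; do
    yum -y install "$p" || true   # keep going if a package is already installed
done
```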
5. Create the GI and RAC users and home directories
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
groupadd -g 1300 dba
groupadd -g 1301 oper
useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid
useradd -m -u 1101 -g oinstall -G dba,oper,asmdba,asmadmin -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
####################### grid .bash_profile
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
JAVA_HOME=$ORACLE_HOME/jdk; export JAVA_HOME
NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:$JAVA_HOME/bin:$PATH:$HOME/bin:$ORACLE_HOME/bin:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin:/u01/app/common/oracle/bin;export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib;export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib;export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
stty erase ^H
####################### oracle .bash_profile
ORACLE_SID=racdb1; export ORACLE_SID
ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_PATH=/u01/app/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
JAVA_HOME=$ORACLE_HOME/jdk; export JAVA_HOME
NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:$JAVA_HOME/bin:$PATH:$HOME/bin:$ORACLE_HOME/bin:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin:/u01/app/common/oracle/bin;export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib;export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib;export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
stty erase ^H
6. Partition the shared disk with fdisk; pin device names and permissions with UDEV
The shared disk's device name is sdb. As root, run: fdisk /dev/sdb
sdb has 1305 cylinders in total; split them evenly into four primary partitions, sdb1 through sdb4.
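A quick check of how the split works out (assuming the 1305-cylinder geometry fdisk reports):

```shell
#!/bin/bash
# 1305 cylinders spread across 4 primary partitions:
per=$((1305 / 4))   # 326 cylinders per partition
rem=$((1305 % 4))   # 1 leftover cylinder ends up in the last partition
echo "$per cylinders per partition, remainder $rem"
```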
Bind the four partitions to raw devices:
raw /dev/raw/raw1 /dev/sdb1
raw /dev/raw/raw2 /dev/sdb2
raw /dev/raw/raw3 /dev/sdb3
raw /dev/raw/raw4 /dev/sdb4
Check each partition's major and minor numbers (fifth and sixth columns) with ls -l /dev/sdb*:
brw-rw----. 1 root disk 8, 16 May 2 04:53 /dev/sdb
brw-rw----. 1 root disk 8, 17 May 2 04:53 /dev/sdb1
brw-rw----. 1 root disk 8, 18 May 2 04:53 /dev/sdb2
brw-rw----. 1 root disk 8, 19 May 2 04:53 /dev/sdb3
brw-rw----. 1 root disk 8, 20 May 2 04:53 /dev/sdb4
Edit the UDEV rules file /etc/udev/rules.d/60-raw.rules and add the following lines:
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="17", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="18", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="19", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="20", RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add", KERNEL=="raw[1-4]",OWNER="grid",GROUP="asmadmin",MODE="660"
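The four MAJOR/MINOR rules can also be generated rather than typed out (a sketch; it assumes the minors run 17-20 as shown above):

```shell
#!/bin/bash
# Emit the raw-binding udev rules for sdb1..sdb4 (major 8, minors 17-20).
for i in 1 2 3 4; do
    minor=$((16 + i))
    printf 'ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="%d", RUN+="/bin/raw /dev/raw/raw%d %%M %%m"\n' \
        "$minor" "$i"
done
```

Redirect the output with `>> /etc/udev/rules.d/60-raw.rules` once it looks right.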
Restart UDEV:
start_udev
Check the raw device ownership and permissions with ls -l /dev/raw:
crw-rw----. 1 grid asmadmin 162, 5 May 2 21:42 raw1
crw-rw----. 1 grid asmadmin 162, 6 May 2 21:42 raw2
crw-rw----. 1 grid asmadmin 162, 7 May 2 21:41 raw3
crw-rw----. 1 grid asmadmin 162, 8 May 2 21:42 raw4
crw-rw----. 1 root disk 162, 0 May 2 04:53 rawctl
I originally didn't plan to use raw devices: a previous attempt with raw performed very badly while using the partitions directly was faster, so I was reluctant to use raw again. But this time, after UDEV changed the partition permissions, writing to the partitions with dd reset their owner back to root. The cluster installer also writes to the partitions to validate them before creating the OCR and voting disk group, and once that write reset the owner, creating the CRS disk group failed. So raw it had to be. Raw writes were actually fast this time: writing 25 MB to raw1 reached about 196 MB/s.
7. OS configuration before installing GI and RAC
A. /etc/hosts
## node 1
192.168.11.101 ora
192.168.11.102 ora-vip
192.168.88.100 ora-priv
## node 2
192.168.11.104 orb
192.168.11.105 orb-vip
192.168.88.101 orb-priv
## SCAN
192.168.11.103 ora-scan
B. limits.conf entries for the grid and oracle users
echo 'oracle soft memlock 5242880' >>/etc/security/limits.conf
echo 'oracle hard memlock 5242880' >>/etc/security/limits.conf
echo 'oracle soft nproc 2047' >>/etc/security/limits.conf
echo 'oracle hard nproc 16384' >>/etc/security/limits.conf
echo 'oracle soft nofile 65536' >>/etc/security/limits.conf
echo 'oracle hard nofile 65536' >>/etc/security/limits.conf
echo 'grid hard nofile 65536' >>/etc/security/limits.conf
echo 'grid hard nproc 16384' >>/etc/security/limits.conf
echo 'grid soft nproc 2047' >>/etc/security/limits.conf
C. Kernel parameters
echo 'net.core.rmem_default=262144'>>/etc/sysctl.conf
echo 'net.core.wmem_default=262144'>>/etc/sysctl.conf
echo 'net.core.rmem_max = 4194304'>>/etc/sysctl.conf
echo 'net.core.wmem_max = 1048576'>>/etc/sysctl.conf
echo 'kernel.shmmni = 4096'>>/etc/sysctl.conf
echo 'kernel.sem = 250 32000 100 128'>>/etc/sysctl.conf
echo 'fs.file-max=6815744'>>/etc/sysctl.conf
echo 'net.ipv4.ip_local_port_range = 9000 65500'>>/etc/sysctl.conf
echo 'fs.aio-max-nr = 1048576'>>/etc/sysctl.conf
sysctl -p
D. Wrap nslookup (there is no real DNS here, so fake the answers for the cluster names)
mv /usr/bin/nslookup /usr/bin/nslookup.original
vi /usr/bin/nslookup
#!/bin/bash
HOSTNAME=${1}
if [[ $HOSTNAME = "ora-scan" ]]; then
    echo "Server: 24.154.1.34"
    echo "Address: 24.154.1.34#53"
    echo "Non-authoritative answer:"
    echo "Name: ora-scan"
    echo "Address: 192.168.11.103"
elif [[ $HOSTNAME = "ora" ]]; then
    echo "Server: 24.154.1.34"
    echo "Address: 24.154.1.34#53"
    echo "Non-authoritative answer:"
    echo "Name: ora"
    echo "Address: 192.168.11.101"
elif [[ $HOSTNAME = "orb" ]]; then
    echo "Server: 24.154.1.34"
    echo "Address: 24.154.1.34#53"
    echo "Non-authoritative answer:"
    echo "Name: orb"
    echo "Address: 192.168.11.104"
else
    /usr/bin/nslookup.original $HOSTNAME
fi
chmod 755 /usr/bin/nslookup
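A dry run of the wrapper's branching, condensed into a function so it can be checked anywhere (the real script stays at /usr/bin/nslookup):

```shell
#!/bin/bash
# Condensed version of the wrapper's if/elif chain, for a quick sanity check.
fake_lookup() {
    case "$1" in
        ora-scan) echo "Address: 192.168.11.103" ;;
        ora)      echo "Address: 192.168.11.101" ;;
        orb)      echo "Address: 192.168.11.104" ;;
        *)        echo "passthrough to nslookup.original" ;;
    esac
}
fake_lookup ora-scan     # faked SCAN answer
fake_lookup example.com  # anything else falls through to the real binary
```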
E. Because this was a minimal install, ntp is not present. If the ntp client is installed, disable it like this:
mv /etc/ntp.conf /etc/ntp.conf.grid
F. Find cvuqdisk on the GI installation media and install it:
rpm -Uvh ./grid/rpm/cvuqdisk-1.0.9-1.rpm
Part 3: Installing GI and RAC, on node ora only
I ran into the following problems during the GI installation:
A. UDEV could not pin down the partition owners, so the CRS disk group could not be created; in the end I fell back to raw devices.
B. At the end of the GI install, running root.sh fails with a missing-library error:
Failed to create keys in the OLR, rc = 127, Message:
/u01/app/11.2.0/grid/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory
First, check which libcap.so versions exist:
ls -l /lib/libcap.so.*
lrwxrwxrwx. 1 root root 14 May 2 02:51 /lib/libcap.so.2 -> libcap.so.2.16
-rwxr-xr-x. 1 root root 12848 Oct 13 2011 /lib/libcap.so.2.16
Then check which rpm owns it:
rpm -qf /lib/libcap.so.*
libcap-2.16-5.5.el6.i686
So it comes down to the OS version: Linux 5 ships libcap version 1, while Linux 6 ships version 2. A symlink solves it:
ln -s /lib/libcap.so.2.16 /lib/libcap.so.1
After that, running root.sh again succeeded.
Installing RAC and the database
As the grid user, run asmca to create the disk group DATA from raw2 through raw4; then install the database software with runInstaller and create the database with dbca. Nothing hard to solve came up here.
I named the database orcl (note the .bash_profile examples above used racdb; the SIDs from here on are orcl1 and orcl2).
Part 4: Adding node orb to GI and RAC
1. Build a second VM, OEL6.4v32_Oracle11g02, with the same configuration: hostname orb, 1 GB RAM, 1 CPU, 12 GB boot disk. The install procedure matches node 1. I deliberately did not clone the first VM, to get more practice configuring OEL 6.4 32-bit; I also skipped GNOME on the second node and did everything by hand.
2. Attach the shared disk created on node 1 to the node 2 VM; no need to mark it shareable again:
C:\"Program Files"\Oracle\VirtualBox\VBoxManage.exe storageattach OEL6.4v32_Oracle11g02 --storagectl "SATA" --port 5 --device 0 --type hdd --medium D:\download\vmware\OEL6.4v32_Oracle11g\rac_disk01.vdi --mtype shareable
3. OS configuration and preparation: same as node 1, omitted here.
4. Adding the node to GI hit quite a few problems. Run addNode.sh as the grid user.
A. The cluvfy pre-node-add check would not complete. The GI and RAC installs on node 1 had gone smoothly and I never ran this check beforehand, but the add-node script GRID_HOME/oui/bin/addNode.sh does run it:
cluvfy stage -pre nodeadd -n orb -fixup -verbose
Performing pre-checks for node addition
Checking node reachability...
Check: Node reachability from node "ora"
Destination Node Reachable?
------------------------------------ ------------------------
orb yes
Result: Node reachability check passed from node "ora"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
orb passed
Result: User equivalence check passed for user "grid"
ERROR:
Reference data is not available for verifying prerequisites on this operating system distribution
Verification cannot proceed
Pre-check for node addition was unsuccessful on all the nodes.
This is a known issue; the documented workaround involves these files:
redhat-release-6Server-1.noarch.rpm
filegroup6.jar
filegroup10.jar
cvu_prereq.xml
Since node 1 was already installed, I did not follow the whole documented fix; I only installed redhat-release-6Server-1.noarch.rpm.
After that, cluvfy stage -pre nodeadd -n orb -fixup -verbose ran to completion. Some checks still failed and the final result was still unsuccessful, but at least the check could finish.
B. addNode.sh kept failing; this cost a lot of time.
GRID_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={orb}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={orb-vip}"
addNode.sh runs its own pre-install checks, and several findings were awkward to resolve: the VM has 1 GB of memory but 1.5 GB is required; 7.5 GB of free filesystem space is required, which node 1, already installed, no longer has; pdksh-5.2.14 simply does not exist on Linux 6; and /etc/resolv.conf raises PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: ora,orb, an old issue that can be ignored in a GUI install.
These findings blocked addNode.sh, so I looked at what the script actually does:
#!/bin/sh
OHOME=/u01/app/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
EXIT_CODE=0
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
$ADDNODE
EXIT_CODE=$?;
else
CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
$CHECK_NODEADD
EXIT_CODE=$?;
EXIT_CODE=0 ## this line added by me
if [ $EXIT_CODE -eq 0 ]
then
$ADDNODE
EXIT_CODE=$?;
fi
fi
exit $EXIT_CODE ;
Reading this script makes it clear: the pre-check returns an exit code, and because those few unimportant findings were unresolved, the code was nonzero and the install would not continue. So I added EXIT_CODE=0 at the key spot, and addNode.sh then completed cleanly. (I also tried the IGNORE_PREADDNODE_CHECKS parameter twice, but probably got the format wrong, so I gave up on that route.)
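Judging from the wrapper's first branch, the variable has to be exported to the environment with the exact value Y before calling addNode.sh; a sketch of that route, with the wrapper's own test reproduced to confirm the format:

```shell
#!/bin/bash
# The wrapper only skips check_nodeadd.pl when this is exactly "Y":
export IGNORE_PREADDNODE_CHECKS=Y

# Reproduce the wrapper's test to show the matching format:
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" ]; then
    echo "pre-add checks would be skipped"
fi
# then run: $GRID_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={orb}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={orb-vip}"
```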
C. After adding the node, GI requires running root.sh, which hit the same missing libcap.so.1 problem; the fix is the same symlink as before.
5. Add the node to RAC, as the oracle user:
$ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={orb}"
This step also ends with a root.sh run.
No problems came up here.
6. Add an instance on node 2
The documentation covers two methods, interactive dbca and dbca -silent; I used neither and added the instance by hand.
A. On node 1, log in with sqlplus as sysdba
Create the undo tablespace undotbs2:
create undo tablespace undotbs2 datafile size 110100480;
Add some redo logs for thread 2:
alter database add logfile thread 2 group 4 '+DATA' size 52428800;
alter database add logfile thread 2 group 5 '+DATA' size 52428800;
alter database add logfile thread 2 group 6 '+DATA' size 52428800;
alter database enable thread 2;
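The literal byte counts above read more easily as megabytes (quick arithmetic):

```shell
#!/bin/bash
echo $((52428800 / 1024 / 1024))    # each redo log: 50 MB
echo $((110100480 / 1024 / 1024))   # undotbs2 datafile: 105 MB
```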
Set the instance parameters:
alter system set UNDO_TABLESPACE=UNDOTBS2 sid='orcl2' scope=spfile;
alter system set instance_number=2 sid='orcl2' scope=spfile;
alter system set thread=2 sid='orcl2' scope=spfile;
B. On node 2
Create the password file at $ORACLE_HOME/dbs/orapworcl2.
Create $ORACLE_HOME/dbs/initorcl2.ora, modeled on node 1's initorcl1.ora, containing:
SPFILE='+DATA/orcl/spfileorcl.ora'
C. Add and start the instance
srvctl add instance -d orcl -i orcl2 -n orb
srvctl start instance -d orcl -i orcl2
At this point both nodes are basically usable. There are probably details I have not noticed yet; I'll keep tuning the configuration over time.
From the ITPUB blog: http://blog.itpub.net/44413/viewspace-759652/ (please credit the source when reposting).