Installing Oracle 11gR2 RAC on HP-UX, AIX, and RHEL
 
These notes cover installing 11gR2 RAC on three operating systems; I hope they are useful to you.
 
1. System Environment
Hardware:
One HP Rx2600 and one HP Rx3600, one SAN switch, and one EVA4400 storage array;
One AIX P570-1 and one AIX P570-2, one SAN switch, and one DS8300 storage array;
One PC server1 and one PC server2, one SAN switch, and one EMC NS-480 storage array.
Software:
hpia64_11gR2_grid.zip, hpia64_11gR2_database.zip
Aix6L64_11gR2_grid.zip, aix6L64_11gR2_database.zip
linux.x64_11gR2_grid.zip, linux.x64_11gR2_database.zip
Installation plan:

Node name | Instance name | Database name | Processor     | RAM | Operating system
----------+---------------+---------------+---------------+-----+-----------------
node1     | rac1          | RAC           | 2 x 1.900 GHz | 4GB | HP-UX/AIX/RHEL
node2     | rac2          | RAC           | 2 x 1.900 GHz | 4GB | HP-UX/AIX/RHEL
Network configuration:

Node name | Private IP address | Public IP address | Virtual IP address | SCAN name      | SCAN IP address
----------+--------------------+-------------------+--------------------+----------------+----------------
node1     | 1.1.1.1            | 11.1.1.1          | 11.1.1.11          | r-cluster-scan | 11.1.1.21
node2     | 1.1.1.2            | 11.1.1.2          | 11.1.1.12          | r-cluster-scan | 11.1.1.21
Oracle software components:

Software component  | OS user | Primary group | Secondary groups          | Home directory | Oracle base / Oracle home
--------------------+---------+---------------+---------------------------+----------------+--------------------------
Grid Infrastructure | grid    | oinstall      | asmadmin, asmdba, asmoper | /home/grid     | /u01/app/crs_base, /u01/app/crs_home
Oracle RAC          | oracle  | oinstall      | dba, oper, asmdba         | /home/oracle   | /u01/app/oracle, /u01/app/oracle/product/11.2.0/db_1
Storage components:

Storage component    | File system | Volume size | ASM disk group | ASM redundancy | Device names
---------------------+-------------+-------------+----------------+----------------+-------------
OCR / voting files   | ASM         | 300 GB      | CRSDG1         | External       | oraocrs1~3
Data / recovery area | ASM         | 300 GB      | DATADG1        | External       | oradata4~6
2. Pre-Installation Preparation
2.1 Software and Patch Lists
2.1.1 HP-UX Software and Patch List
Required patches:
PHCO_40381 11.31 Disk Owner Patch
PHKL_38038 vm cumulative patch
PHKL_38938 11.31 SCSI cumulative I/O patch
PHKL_39351 Scheduler patch : post wait hang
PHSS_36354 11.31 assembler patch
PHSS_37042 11.31 hppac (packed decimal)
PHSS_37959 Libcl patch for alternate stack issue fix
(QXCR1000818011)
PHSS_39094 11.31 linker + fdp cumulative patch
PHSS_39100 11.31 Math Library Cumulative Patch
PHSS_39102 11.31 Integrity Unwind Library
PHSS_38141 11.31 aC++ Runtime
Patch download locations:
HP provides patch bundles at
Individual patches can be downloaded from
Patch installation command:
#swinstall -s <depot_path> <patch_name>
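A quick way to confirm the patches landed on each node (a minimal sketch; swlist output formats vary by release, so adjust the grep if needed):

for p in PHCO_40381 PHKL_38038 PHKL_38938 PHKL_39351 PHSS_36354 \
         PHSS_37042 PHSS_37959 PHSS_39094 PHSS_39100 PHSS_39102 PHSS_38141
do
    # flag any patch from the required list that swlist does not report
    swlist -l patch 2>/dev/null | grep -q "$p" || echo "$p is MISSING"
done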
 
2.1.2 AIX Software and Patch List
AIX 6.1/5.3 required packages:
bos.adt.base
bos.adt.lib
bos.adt.libm
bos.perf.libperfstat 5.3.9.0 or later (AIX 5.3)
bos.perf.libperfstat 6.1.2.1 or later (AIX 6.1)
bos.perf.perfstat
bos.perf.proctools
rsct.basic.rte
rsct.compat.clients.rte
xlC.aix50.rte:10.1.0.0 or later (AIX 5.3)
xlC.aix61.rte:10.1.0.0 or later (AIX 6.1)
gpfs.base 3.2.1.8 or later (Only for RAC)
Authorized Problem Analysis Reports (APARs) for AIX 5L:
IZ42940
IZ49516
IZ52331
 
2.1.3 RHEL Software and Patch List
RHEL5, OEL5:
Refer to Note 880989.1
binutils-2.17.50.0.6-6.el5 (x86_64)
compat-libstdc++-33-3.2.3-61 (x86_64) << both ARCH's are required. See next line.
compat-libstdc++-33-3.2.3-61 (i386) << both ARCH's are required. See previous line.
elfutils-libelf-0.125-3.el5 (x86_64)
glibc-2.5-24 (x86_64) << both ARCH's are required. See next line.
glibc-2.5-24 (i686) << both ARCH's are required. See previous line.
glibc-common-2.5-24 (x86_64)
ksh-20060214-1.7 (x86_64)
libaio-0.3.106-3.2 (x86_64) << both ARCH's are required. See next line.
libaio-0.3.106-3.2 (i386) << both ARCH's are required. See previous line.
libgcc-4.1.2-42.el5 (i386) << both ARCH's are required. See next line.
libgcc-4.1.2-42.el5 (x86_64) << both ARCH's are required. See previous line.
libstdc++-4.1.2-42.el5 (x86_64) << both ARCH's are required. See next line.
libstdc++-4.1.2-42.el5 (i386) << both ARCH's are required. See previous line.
make-3.81-3.el5 (x86_64)
The remaining Install Guide requirements will have to be installed:
elfutils-libelf-devel-0.125-3.el5.x86_64.rpm
a.) requires elfutils-libelf-devel-static-0.125-3.el5.x86_64.rpm as a prerequisite, as listed below.
b.) elfutils-libelf-devel and elfutils-libelf-devel-static each depend upon the other. Therefore, they must be installed together, in one (1) "rpm -ivh" command as follows:
rpm -ivh elfutils-libelf-devel-0.125-3.el5.x86_64.rpm elfutils-libelf-devel-static-0.125-3.el5.x86_64.rpm
glibc-headers-2.5-24.x86_64.rpm
a.) requires kernel-headers-2.6.18-92.el5.x86_64.rpm as a prerequisite, as listed below
glibc-devel-2.5-24.x86_64.rpm << both ARCH's are required. See next item.
glibc-devel-2.5-24.i386.rpm << both ARCH's are required. See previous item.
gcc-4.1.2-42.el5.x86_64.rpm
a.) requires libgomp-4.1.2-42.el5.x86_64.rpm as a prerequisite, as listed below
libstdc++-devel-4.1.2-42.el5.x86_64.rpm
gcc-c++-4.1.2-42.el5.x86_64.rpm
libaio-devel-0.3.106-3.2.x86_64.rpm << both ARCH's are required. See next item
libaio-devel-0.3.106-3.2.i386.rpm << both ARCH's are required. See previous item.
sysstat-7.0.2-1.el5.x86_64.rpm
unixODBC-2.2.11-7.1.x86_64.rpm << both ARCH's are required. See next item
unixODBC-2.2.11-7.1.i386.rpm << both ARCH's are required. See previous item.
unixODBC-devel-2.2.11-7.1.x86_64.rpm << both ARCH's are required. See next item
unixODBC-devel-2.2.11-7.1.i386.rpm << both ARCH's are required. See previous item.
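Verifying both architectures of each package one by one is tedious; a loop like the following (a sketch, trim the package list to your release) prints every installed instance with its architecture and flags anything missing:

for p in binutils compat-libstdc++-33 elfutils-libelf glibc glibc-common \
         ksh libaio libgcc libstdc++ make gcc gcc-c++ glibc-devel \
         libaio-devel sysstat unixODBC unixODBC-devel
do
    # rpm prints one line per installed arch; the || branch fires when nothing is installed
    rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' "$p" || echo "$p is MISSING"
done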
RHEL4, OEL4:
Refer to Note 880942.1
SLES10:
Refer to Note 884435.1
SLES11:
Refer to Note 881044.1
2.2 Kernel Settings
2.2.1 HP-UX kernel parameters
1. Set the kernel parameters as follows:
NPROC 4096
KSI_ALLOC_MAX (NPROC*8)
EXECUTABLE_STACK=0
MAX_THREAD_PROC 1024
MAXDSIZ 1073741824
MAXDSIZ_64BIT 2147483648
MAXTSIZE_64BIT 1073741824
MAXSSIZ 134217728 bytes
MAXSSIZ_64BIT 1073741824
MAXUPRC ((NPROC*9)/10)+1
MSGMAP (MSGTQL+2) *
MSGMNI (NPROC)
MSGSEG 32767 *
MSGTQL (NPROC)
NCSIZE (NINODE+1024)
NFILE (15*NPROC+2048) *
NFLOCKS (NPROC)
NINODE (8*NPROC+2048)
NKTHREAD (((NPROC*7)/4)+16)
SEMMNI (NPROC)
SEMMNS (SEMMNI*2)
SEMMNU (NPROC - 4)
SEMVMX 32767
SHMMAX AvailMem
SHMMNI 4096
SHMSEG 512
VPS_CEILING 64
Kernel tuning command:
#kctune parameter=value
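For example, to apply one fixed value and one formula-based value from the list above and then verify the result (formulas are quoted so the shell leaves them alone; a sketch, check kctune(1M) on your release):

#kctune maxdsiz_64bit=2147483648
#kctune ksi_alloc_max='NPROC*8'
#kctune -v ksi_alloc_max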
2. Create the following links on each node:
# cd /usr/lib
# ln -s libX11.3 libX11.sl
# ln -s libXIE.2 libXIE.sl
# ln -s libXext.3 libXext.sl
# ln -s libXhp11.3 libXhp11.sl
# ln -s libXi.3 libXi.sl
# ln -s libXm.4 libXm.sl
# ln -s libXp.2 libXp.sl
# ln -s libXt.3 libXt.sl
# ln -s libXtst.2 libXtst.sl
 
2.2.2 AIX kernel parameters
Set AIXTHREAD_SCOPE=S in the environment (see the installation guide, Part Number E10839-04):
export AIXTHREAD_SCOPE=S
1. Modify the resource limits for the Oracle users.
As root, edit /etc/security/limits:
#vi /etc/security/limits
default:
        fsize = -1
        core = 2097151
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = -1
 
2. Modify the system configuration parameters:
# ioo -o aio_maxreqs
aio_maxreqs = 65536   
# lsattr -E -l sys0 -a maxuproc
# smit chgsys
maxuproc 16384 Maximum number of PROCESSES allowed per user True
 
3. Virtual memory parameters:
vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
vmo -p -o lru_file_repage=0
vmo -p -o strict_maxclient=1
vmo -p -o strict_maxperm=0
 
4. Configure the network parameters.
Add the following lines to /etc/rc.net:
if [ -f /usr/sbin/no ] ; then
   /usr/sbin/no -o udp_sendspace=65536
   /usr/sbin/no -o udp_recvspace=655360
   /usr/sbin/no -o tcp_sendspace=65536
   /usr/sbin/no -o tcp_recvspace=65536
   /usr/sbin/no -o rfc1323=1
   /usr/sbin/no -o sb_max=1310720      # no takes literal values; 1310720 = 2*655360
   /usr/sbin/no -r -o ipqmaxlen=512
fi
 
The ipqmaxlen parameter requires a reboot to take effect:
/usr/sbin/no -r -o ipqmaxlen=512
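After the reboot, the values can be confirmed by dumping the current settings and filtering for the tunables set above:

# /usr/sbin/no -a | egrep "udp_sendspace|udp_recvspace|tcp_sendspace|tcp_recvspace|rfc1323|sb_max|ipqmaxlen"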
 
2.2.3 RHEL kernel parameters
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 512 x processes (for example 6815744 for 13312 processes)
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.shmall = physical RAM size / pagesize; for most systems this will be 2097152. See Note 301830.1 for more information.
kernel.shmmax = 1/2 of physical RAM, but not greater than 4 GB; this would be 2147483648 for a system with 4 GB of physical RAM.
 
1. Modify the resource limits for the Oracle users:
#vi /etc/security/limits.conf
grid                       soft     nproc    2047
grid                       hard     nproc    16384
grid                       soft     nofile 1024
grid                       hard     nofile 65536
oracle                     soft     nproc    2047
oracle                     hard     nproc    16384
oracle                     soft     nofile 1024
oracle                     hard     nofile 65536
2. Modify the system configuration parameters:
#vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 8388608
kernel.shmmax = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Apply the changes:
# sysctl -p
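A quick spot check that the running kernel picked up the new values (filtering on the parameters above):

# sysctl -a 2>/dev/null | egrep "shmall|shmmax|shmmni|kernel.sem|file-max|aio-max-nr|ip_local_port_range"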
 
2.3 Configure NTP
2.3.1 Configure NTP on HP-UX
1. Configure the NTP server:
 #vi /etc/ntp.conf
server    127.127.1.1
fudge             127.127.1.1 stratum 10
#vi /etc/rc.config.d/netdaemons
NTPDATE_SERVER=
XNTPD=1
XNTPD_ARGS="-x"
# /sbin/init.d/xntpd    start
2. Configure the NTP clients:
#vi /etc/ntp.conf
server 128.1.1.1                         # assuming 128.1.1.1 is the NTP server's IP address
driftfile    /etc/ntp.drift
 
# vi /etc/rc.config.d/netdaemons
NTPDATE_SERVER=128.1.1.1         # assuming 128.1.1.1 is the NTP server's IP address
XNTPD=1
XNTPD_ARGS="-x"
# /sbin/init.d/xntpd    start
3. Verify that NTP is working: run ntpq -p and confirm that the client has formed the proper association.
   #/usr/bin/ntpq -p
  remote            refid          st t when poll reach   delay   offset    disp
===================================================================
*rx2600           LOCAL(1)      4 u 37 64 377   0.14    7.495    0.18
 
2.3.2 Configure NTP on AIX
1. 11gR2 RAC ships with the CTSS time synchronization service, so the installation guide asks you to disable NTP. The final installation checks may still report that the NTP service is unavailable; this can safely be ignored.
# stopsrc -s xntpd
After installation completes, verify the time synchronization service as the grid user:
   $ crsctl stat resource ora.ctssd -t -init
 
2. NTP configuration.
Enable the slewing option:
# vi /etc/rc.tcpip
start /usr/sbin/xntpd "$src_running" "-x"
 
3. Start the daemon on both nodes:
 # startsrc -s xntpd -a "-x"
 
2.3.3 Configure NTP on RHEL
1. Server-side configuration.
Add the following line to /etc/ntp.conf:
restrict 10.20.28.0 mask 255.255.255.0 nomodify notrap
 
Note: the /etc/hosts file should contain:
127.0.0.1            localhost    loopback
 
2. Client-side configuration:
# ntpdate 10.20.28.39
# crontab -e
0-59/10 * * * * /usr/bin/ntpdate 10.20.28.39
 
3. Enable the slewing option.
Edit /etc/sysconfig/ntpd and add the -x flag, as in the following example:
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
 
4. Enable and start the daemon on both nodes:
# chkconfig ntpd on
# service ntpd restart
 
2.4 DNS Configuration
Since the node count is usually small, resolving the SCAN through DNS has little practical value here; a small trick fools the cluvfy tool so that verification passes:
#[/]mv /usr/bin/nslookup /usr/bin/nslookup.org
#[/]cat /usr/bin/nslookup
#!/usr/bin/sh
HOSTNAME=${1}
if [[ $HOSTNAME = "r-cluster-scan" ]]; then
    echo "Server:         24.154.1.34"
    echo "Address:        24.154.1.34#53"
    echo "Non-authoritative answer:"
    echo "Name:   r-cluster-scan"
    echo "Address: 11.1.1.21" #假设11.1.1.21 为SCAN地址
else
    /usr/bin/nslookup.org $HOSTNAME
fi
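The wrapper must be executable before cluvfy runs; a quick sanity test (paths as above):

#[/]chmod 755 /usr/bin/nslookup
#[/]nslookup r-cluster-scan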
 
Note: if you need to modify your SQLNET.ORA, ensure that EZCONNECT is in the list if you specify the order of the naming methods used for client name resolution lookups (the 11g Release 2 default is NAMES.DIRECTORY_PATH=(tnsnames, ldap, ezconnect)).
 
2.5 Create Users, Directories, and Environment Variables
2.5.1 HP-UX users, directories, and environment variables

1. Create the users and the corresponding directories:
#/usr/sbin/groupadd -g 501 oinstall
#/usr/sbin/groupadd -g 502 dba
#/usr/sbin/groupadd -g 503 oper
#/usr/sbin/groupadd -g 504 asmadmin
#/usr/sbin/groupadd -g 505 asmoper
#/usr/sbin/groupadd -g 506 asmdba
#/usr/sbin/useradd -g oinstall -G dba,asmdba,oper oracle
#/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
#mkdir -p /u01/app/crs_base
#mkdir -p /u01/app/crs_home
#mkdir -p /u01/app/oracle/product/11.2.0/db_1
#chown -R oracle:oinstall /u01/app/oracle
#chown -R grid:oinstall /u01/app/crs*
# chown grid:asmadmin /dev/rdsk/cxtydz
# chmod 660 /dev/rdsk/cxtydz
# chown grid:asmadmin /dev/rdisk/cxtydz
# chmod 660 /dev/rdisk/cxtydz
 
Note: Before installation, OCR files must be owned by the user performing the installation (grid or oracle). That installation user must have oinstall as its primary group. During installation, OUI changes ownership of the OCR files to root.
 
To protect the OCR from logical disk failure, create another ASM diskgroup after installation and add the OCR to the second diskgroup using the ocrconfig command.
 
2. Configure the grid user's environment variables.
#su - grid
Note: the grid user's home directory must not be a subdirectory of the Oracle base directory.
#more .profile    (grid user's environment)
export PS1="`/usr/bin/hostname`-> "
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/crs_base
export ORACLE_HOME=/u01/app/crs_home
export PATH=$ORACLE_HOME/bin:$PATH:/usr/local/bin/:.
/usr/local/bin/bash
#
if [ -t 0 ]; then
   stty intr ^C
fi
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
        umask 022
fi
3. Configure the oracle user's environment variables.
#su - oracle
#more .profile    (oracle user's environment)
export PS1="`/usr/bin/hostname`-> "
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORA_GRID_HOME=/u01/app/crs_home/
export ORACLE_OWNER=oracle
export ORACLE_SID=dbrac1
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_GRID_HOME/bin:/sbin:/usr/sbin:/bin:/usr/local/bin:.
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export NLS_LANG=american_america.ZHS16GBK
export ORACLE_PATH=/home/oracle
/usr/local/bin/bash
if [ -t 0 ]; then
stty intr ^C
fi
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
        umask 022
fi
 
Note:
#For the Bourne, Bash, or Korn shell, add lines similar to the following to the /etc/profile file:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
        umask 022
fi
 
if [ -t 0 ]; then
   stty intr ^C
fi
 
#For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:
 
if ( $USER == "oracle" || $USER == "grid" ) then
        limit maxproc 16384
        limit descriptors 65536
endif
 
test -t 0
if ($status == 0) then
   stty intr ^C
endif
 
2.5.2 AIX users, directories, and environment variables
1. Create the users and directories:
mkgroup -'A' id='601' adms='root' oinstall
mkgroup -'A' id='602' adms='root' dba
mkgroup -'A' id='603' adms='root' oper
mkgroup -'A' id='604' adms='root' asmadmin
mkgroup -'A' id='605' adms='root' asmdba
mkgroup -'A' id='606' adms='root' asmoper
 
mkuser id='601' pgrp='oinstall' groups='asmadmin,asmdba,asmoper,dba,oper'  home='/home/grid' grid
mkuser id='602' pgrp='oinstall' groups='dba,oper,asmdba' home='/home/oracle' oracle
/usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
mkdir -p /u01/app/crs_base
mkdir -p /u01/app/crs_home
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
 
passwd oracle
passwd grid
/usr/bin/lsuser -a capabilities grid
 
2. Modify the grid user's default environment variables:
#vi /home/grid/.profile
umask 0022
export PS1="`/usr/bin/hostname`-> "
export ORACLE_HOSTNAME=`hostname`
export JAVA_HOME=/usr/local/java
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/crs_base
export ORACLE_HOME=/u01/app/crs_home
export PATH=$ORACLE_HOME/bin:$JAVA_HOME/bin:$PATH:/usr/local/bin/:.
export AIXTHREAD_SCOPE=S
#/usr/local/bin/bash
if [ -t 0 ]; then
   stty intr ^C
fi
 
3. Modify the oracle user's default environment variables:
#vi /home/oracle/.profile
export PS1="`/bin/hostname`-> "
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_OWNER=oracle
export ORACLE_SID=tiqs21
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_GRID_HOME/bin:/sbin:/usr/sbin:/bin:/usr/local/bin:.
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export NLS_LANG=american_america.ZHS16GBK
export ORACLE_PATH=/home/oracle
export AIXTHREAD_SCOPE=S
#/usr/local/bin/bash
if [ -t 0 ]; then
stty intr ^C
fi
umask 022
#For the Bourne, Bash, or Korn shell, add lines similar to the following to the /etc/profile  
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
        umask 022
fi
 
2.5.3 RHEL users, directories, and environment variables
1. Create the users and directories:
/usr/sbin/groupadd -g 501 oinstall
/usr/sbin/groupadd -g 502 dba
/usr/sbin/groupadd -g 503 oper
/usr/sbin/groupadd -g 504 asmadmin
/usr/sbin/groupadd -g 505 asmoper
/usr/sbin/groupadd -g 506 asmdba
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
mkdir -p /u01/app/crs_base
mkdir -p /u01/app/crs_home
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R root:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chown -R grid:oinstall /u01/app/crs*
chmod -R 775 /u01
chmod -R 755 /u01/app/crs*
# chown grid:asmadmin /dev/dm*
# chmod 660 /dev/dm*
Passwd oracle<<EOF
oracle
oracle
EOF
Passwd grid<<EOF
oracle
oracle
EOF
 
2. Modify the grid user's default environment variables:
#vi /home/grid/.bash_profile
umask 0022
export PS1="`/bin/hostname`-> "
#export ORACLE_HOSTNAME=10.20.28.37
export JAVA_HOME=/usr/local/java
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/crs_base
export ORACLE_HOME=/u01/app/crs_home
export PATH=$ORACLE_HOME/bin:$JAVA_HOME/bin:$PATH:/usr/local/bin/:.
#/usr/local/bin/bash
if [ -t 0 ]; then
   stty intr ^C
fi
 
3. Modify the oracle user's default environment variables:
#vi /home/oracle/.bash_profile
export PS1="`/bin/hostname`-> "
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORA_GRID_HOME=/u01/app/crs_home
export ORACLE_OWNER=oracle
export ORACLE_SID=tiqs21
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_GRID_HOME/bin:/sbin:/usr/sbin:/bin:/usr/local/bin:.
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export NLS_LANG=american_america.ZHS16GBK
export ORACLE_PATH=/home/oracle
#/usr/local/bin/bash
if [ -t 0 ]; then
stty intr ^C
fi
umask 022
 
#For the Bourne, Bash, or Korn shell, add lines similar to the following to the /etc/profile file:
 
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
        umask 022
fi
 
 
2.6 Configure Name Resolution
Edit /etc/hosts to map the IP addresses to the host names; a quick reachability check follows the entries below.
11.1.1.2    node2
1.1.1.2     node2-priv
11.1.1.12    node2-vip
11.1.1.21    r-cluster-scan
11.1.1.1    node1
1.1.1.1     node1-priv
11.1.1.11    node1-vip
127.0.0.1        localhost       loopback
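A reachability pass over these entries helps catch typos early (Linux ping flags shown; HP-UX and AIX use different options, and the VIP and SCAN addresses will only answer once Clusterware is up):

for h in node1 node1-priv node2 node2-priv
do
    # one ping per host with a 2-second deadline
    ping -c 1 -w 2 "$h" >/dev/null 2>&1 && echo "$h OK" || echo "$h unreachable"
done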
3. Storage Setup
3.1 HP-UX Storage Setup
EVA configuration page (screenshot omitted).
You must have space available on Automatic Storage Management for Oracle Clusterware files (voting disks and Oracle Cluster Registries), and for Oracle Database files, if you install standalone or Oracle Real Application Clusters Databases. Creating Oracle Clusterware files on block or raw devices is no longer supported for new installations.
Note:
When using Oracle Automatic Storage Management (Oracle ASM) for either the Oracle Clusterware files or Oracle Database files, Oracle creates one Oracle ASM instance on each node in the cluster, regardless of the number of databases.
 
Oracle Clusterware voting disks are used to monitor cluster node status, and Oracle Cluster Registry (OCR) files contain configuration information about the cluster. You can place voting disks and OCR files either in an ASM disk group, or on a cluster file system or shared network file system. Storage must be shared; any node that does not have access to an absolute majority of voting disks (more than half) will be restarted.
If you create a diskgroup during installation, then it must be at least 2 GB.
If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:
■ All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.
■ Do not specify multiple partitions on a single physical disk as a disk group device. Automatic Storage Management expects each disk group device to be on a separate physical disk.
■ Although you can specify a logical volume as a device in an Automatic Storage Management disk group, Oracle does not recommend their use. Logical volume managers can hide the physical disk architecture, preventing Automatic Storage Management from optimizing I/O across the physical devices. They are not supported with Oracle RAC.
 
#ioscan -funN -C disk
# /usr/sbin/insf -e
rx2600#[/dev/rdisk]ioscan -m dsf
Persistent DSF            Legacy DSF(s)
========================================
/dev/pt/pt2               /dev/rscsi/c9t0d0
                         /dev/rscsi/c11t0d0
/dev/rdisk/disk2          /dev/rdsk/c0t0d0
/dev/rdisk/disk3          /dev/rdsk/c2t1d0
/dev/rdisk/disk3_p1       /dev/rdsk/c2t1d0s1
/dev/rdisk/disk3_p2       /dev/rdsk/c2t1d0s2
/dev/rdisk/disk3_p3       /dev/rdsk/c2t1d0s3
/dev/rdisk/disk8          /dev/rdsk/c10t0d1
                         /dev/rdsk/c12t0d1
/dev/rdisk/disk9          /dev/rdsk/c10t0d2
                         /dev/rdsk/c12t0d2
/dev/rdisk/disk18         /dev/rdsk/c10t0d3
                         /dev/rdsk/c12t0d3
/dev/rdisk/disk19         /dev/rdsk/c10t0d4
                         /dev/rdsk/c12t0d4
/dev/rdisk/disk20         /dev/rdsk/c10t0d5
                         /dev/rdsk/c12t0d5
/dev/rdisk/disk21         /dev/rdsk/c10t0d6
                         /dev/rdsk/c12t0d6
rx2600#[/dev/rdisk]
 
 
rx3600#[/dev/rdisk]ioscan -m dsf
Persistent DSF            Legacy DSF(s)
========================================
/dev/rdisk/disk1          /dev/rdsk/c0t0d0
/dev/rdisk/disk1_p1       /dev/rdsk/c0t0d0s1
/dev/rdisk/disk1_p2       /dev/rdsk/c0t0d0s2
/dev/rdisk/disk1_p3       /dev/rdsk/c0t0d0s3
/dev/pt/pt2               /dev/rscsi/c2t0d0
                         /dev/rscsi/c4t0d0
/dev/rdisk/disk3          /dev/rdsk/c1t0d0
/dev/rdisk/disk7          /dev/rdsk/c3t0d1
                         /dev/rdsk/c5t0d1
/dev/rdisk/disk10         /dev/rdsk/c3t0d2
                         /dev/rdsk/c5t0d2
/dev/rdisk/disk19         /dev/rdsk/c3t0d3
                         /dev/rdsk/c5t0d3
/dev/rdisk/disk20         /dev/rdsk/c3t0d4
                         /dev/rdsk/c5t0d4
/dev/rdisk/disk21         /dev/rdsk/c3t0d5
                         /dev/rdsk/c5t0d5
/dev/rdisk/disk22         /dev/rdsk/c3t0d6
                         /dev/rdsk/c5t0d6
 
 
 
Ensure that both servers see the same device names under /dev/rdisk.
Because the device names seen on the two nodes differ, manually create matching device files on both nodes with the following commands:
#[/dev/rdisk]mknod ora_rac_1 c 13 0x000007
#[/dev/rdisk]mknod ora_rac_2 c 13 0x000008
#[/dev/rdisk]mknod ora_rac_3 c 13 0x000009
#[/dev/rdisk]mknod ora_rac_4 c 13 0x00000a
#[/dev/rdisk]mknod ora_rac_5 c 13 0x00000b
#[/dev/rdisk]mknod ora_rac_6 c 13 0x00000c
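The major number (13) and the minor numbers (0x000007 and so on) used above are taken from the existing persistent DSFs; they can be read on each node with ls (output illustrative):

# ls -l /dev/rdisk/disk21
crw-r-----   1 bin        sys         13 0x00000c Jan  1  2010 /dev/rdisk/disk21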
 
3.2 AIX Storage Setup
1. Configure HACMP:
1) Create the cluster
2) Create the resource group
3) Add the shared disks to the resource group
4) Configure the serial network as the heartbeat network
5) Configure the IP network
6) Synchronize the cluster configuration
7) Test the HACMP cluster
2. Start the cluster.
On the first node:
# hostname
db01
# smitty clstart                  -- start HACMP
# lssrc -g cluster                -- check HACMP status

On the second node:
# hostname
db02
# smitty clstart
# lssrc -g cluster
3. Check that the cluster started correctly:
# lsvg -o
datavg
4. Ensure the OCR, voting disk, and data file devices are owned by grid:asmadmin:
chown grid:asmadmin /dev/<your_character_device>
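For example, with hypothetical character devices rhdisk4 through rhdisk9 backing the CRS and data disk groups (substitute your own device names):

for n in 4 5 6 7 8 9
do
    # grant the grid owner and ASM group access to each raw device
    chown grid:asmadmin /dev/rhdisk$n
    chmod 660 /dev/rhdisk$n
done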
 
3.3 RHEL Storage Setup
1. Configure the multipath software.
On the storage array, present the disks to all hosts that will run RAC, and configure multipathing.
Comment out the following three lines in /etc/multipath.conf:
blacklist {
                devnode "*"
}
#chkconfig multipathd on
#service multipathd restart

[root@scdb10 etc]# multipath -l | grep dm- | sort
mpath0 (36001438005ded1d60000700005100000) dm-2 HP,HSV450
mpath1 (36001438005ded1d60000700005140000) dm-3 HP,HSV450
 
2. Install the ASMLib packages.
Download the ASM packages from the Oracle website:
oracleasm-support-2.1.3-1.el5.x86_64.rpm
oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
oracleasm-2.6.18-164.el5debug-2.0.5-1.el5.x86_64.rpm
oracleasm-2.6.18-164.el5-debuginfo-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
 
# Install the RPMs in this order:
rpm -ivh oracleasm-support-2.1.3-1.el5.x86_64.rpm
rpm -ivh oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm
3. Configure ASM on both nodes:
[root@scdb9 dev]# service oracleasm configure
Configuring the Oracle ASM library driver.
 
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
 
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
4. Check the status:
[root@scdb9 soft]# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
 
5. Verify the candidate disk sizes. On both machines:
# sfdisk -s
/dev/dm-2: 524288000
/dev/dm-3: 524288000
6. Create the ASM disks on one node:
[root@scdb9 ~]# service oracleasm createdisk DG01D000 /dev/dm-2
Marking disk "DG01D000" as an ASM disk: [ OK ]
[root@scdb9 ~]# service oracleasm createdisk CRS01D001 /dev/dm-12
Marking disk "CRS01D001" as an ASM disk: [ OK ]
 
7. Scan for the ASM disks on both nodes:
 /etc/init.d/oracleasm scandisks
/usr/sbin/oracleasm listdisks
 
Adjust the device ownership:
chown grid:asmadmin /dev/dm*
chown grid:asmadmin /dev/mapper/mpath*
 
8. Make the device permissions persistent across reboots with a udev rules file; see the sketch below.
# cd /etc/udev/rules.d
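A minimal sketch of such a rules file (hypothetical name 99-oracle-asmdevices.rules; the match keys depend on how your multipath devices are named, so verify the attributes with udevinfo, or udevadm on newer releases, before relying on it):

# cat 99-oracle-asmdevices.rules
KERNEL=="dm-*", OWNER="grid", GROUP="asmadmin", MODE="0660"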
4. Pre-Installation Checks
4.1 HP-UX Pre-Installation Checks
1. Additional checks:
#bdf /u01
Ensure you have at least 4.5 GB of space for the grid infrastructure for a cluster home (Grid home) This includes Oracle Clusterware and Automatic Storage Management (Oracle ASM) files and log files.
 
#bdf /tmp     (needs more than 1 GB of temp space)
Ensure that you have at least 1 GB of space in /tmp
 
#add default gateway
route add default 1.1.1.1 1
#vi /etc/rc.config.d/netconf
ROUTE_GATEWAY[0]=15.70.146.254
ROUTE_DESTINATION[0]=default
ROUTE_COUNT[0]=1
 
2. After the steps above are complete, run the following from the Grid installation directory:
#./runcluvfy.sh stage -pre crsinst -n rx2600,rx3600 -fixup -verbose
Log in as root and run the generated fixup script:
# sh /tmp/CVU_11.2.0.1.0_grid/runfixup.sh
4.2 AIX Pre-Installation Checks
1. Verify memory and swap space:
/usr/sbin/lsattr -E -l sys0 -a realmem
 /usr/sbin/lsps -a
 swap -l
2. Temporary space.
/tmp needs at least 1 GB; check with:
df -g /tmp
If /tmp has less than 1 GB free, set the temporary directory environment variables manually:
TEMP=/mount_point/tmp
TMPDIR=/mount_point/tmp
export TEMP TMPDIR
3. Database software partition requirements.
The database software is normally installed under /u01; creating a dedicated partition with mount point /u01 is recommended.
Grid needs about 12 GB and the database about 9 GB, so /u01 should be at least 40 GB.
4. Check the installed OS packages:
lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat \
 bos.perf.perfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte xlC.aix61.rte
Fix packages reported during verification have no impact even if they are not installed:
# instfix -i -k "IZ41855 IZ51456 IZ52319"
 There was no data for IZ41855 in the fix database.
 There was no data for IZ51456 in the fix database.
     There was no data for IZ52319 in the fix database.
 
4.3 RHEL Pre-Installation Checks
1. Check the Linux version:
# cat /proc/version

2. Check whether the required packages are installed:
# rpm -q package_name
for example: # rpm -q gcc
4.4 Common Pre-Installation Checks
Note: other useful check options:
#As the oracle user, on the node with the installation media, verify the node connectivity configuration:
/app/clusterware/cluvfy/runcluvfy.sh comp nodecon -n node1,node2 -verbose
 
#As the oracle user, on the node with the installation media, verify that the hardware and operating system are suitable:
/app/clusterware/cluvfy/runcluvfy.sh stage -post hwos -n node1,node2 -verbose
 
#As the oracle user, on the node with the installation media, check for valid shared storage:
/app/clusterware/cluvfy/runcluvfy.sh comp ssa -n node1,node2 -s /dw/dsk/c1t2d3,/dw/dsk/c2t4d5
 
#As the oracle user, check that the Clusterware installation prerequisites are met:
/app/clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
 
#As the oracle user, check that the prerequisites for installing the Oracle software are met:
$/app/clusterware/cluvfy/runcluvfy.sh stage -pre dbinst -n node1,node2 -verbose
 
#As the oracle user, check whether the current setup supports creating the RAC database:
$/app/clusterware/cluvfy/runcluvfy.sh stage -pre dbcfg -n node1,node2 -d /oracle/product/Oracle -verbose
5. Install Oracle Grid Infrastructure
If installing on IBM AIX, run the pre-installation script first:
log in as root, then run rootpre.sh on all nodes;
#su - grid
Bourne or Korn shell:
$ DISPLAY=local_host:0.0 ; export DISPLAY
C shell:
% setenv DISPLAY local_host:0.0
Launch the installer as the grid user (./runInstaller) and step through the OUI screens (screenshots omitted): installation option, installation type, language selection, SCAN configuration, cluster node information, network interface usage, storage option, ASM disk group creation, privileged operating system groups, installation location, oraInventory location, prerequisite checks, installation summary, and finally run the root scripts when prompted.
 
To create additional disk groups, log in as the grid user and run:
$ asmca
Note: the ASM and database compatibility attributes must be configured (a sketch follows the template commands below); fine-grained striping templates can also be set:
ALTER DISKGROUP DATADG1 MODIFY TEMPLATE DATAFILE ATTRIBUTES(FINE);
ALTER DISKGROUP DATADG1 MODIFY TEMPLATE TEMPFILE ATTRIBUTES(FINE);
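A sketch of setting the compatibility attributes from SQL*Plus as the grid user (the '11.2' values here are assumptions; pick the lowest versions your databases actually need, since these attributes cannot be lowered once raised):

$ sqlplus / as sysasm <<EOF
ALTER DISKGROUP DATADG1 SET ATTRIBUTE 'compatible.asm' = '11.2';
ALTER DISKGROUP DATADG1 SET ATTRIBUTE 'compatible.rdbms' = '11.2';
EOF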
 
6. Install the Oracle Database Software
#su - oracle
$ ./runInstaller
Step through the OUI screens (screenshots omitted): security updates configuration, installation option, Grid installation options, installation type, language selection, database edition, installation location, privileged operating system groups, prerequisite checks, installation summary, and run the root scripts when prompted.

Create the database as the oracle user:
$ dbca &
 
7. Uninstall the Oracle Software
1. First run the deconfiguration script:
/u01/app/crs_home/crs/install/rootcrs.pl -deconfig -force
2. Wipe the ASM disk headers on the storage devices:
dd if=/dev/zero of=/dev/dm-9 bs=8192 count=16384
dd if=/dev/zero of=/dev/dm-10 bs=8192 count=16384
dd if=/dev/zero of=/dev/dm-11 bs=8192 count=16384
3. Remove the related directories and files:
rm -rf /var/opt/oracle
rm -rf /u01/app/*
rm -rf /tmp/.oracle
rm -rf /tmp/OraInstall*
rm -rf /etc/oratab
rm -rf /opt/oracle
4. Recreate the directories:
mkdir -p /u01/app/crs_base
mkdir -p /u01/app/crs_home
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R root:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chown -R grid:oinstall /u01/app/crs*
chmod -R 775 /u01
chmod -R 755 /u01/app/crs*
 
5. On RHEL, recreate the ASM disks:
service oracleasm createdisk CRS01D001 /dev/dm-12
chown grid:asmadmin /dev/dm*
 
8. Basic RAC Administration
1. Status check:
rx2600-> su - grid -c "crs_stat -t -v"
 
Name            Type           R/RA   F/FT   Target    State     Host       
----------------------------------------------------------------------
ora.CRS1.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    rx2600     
ora.DATA1.dg    ora....up.type 0/5    0/     ONLINE    ONLINE    rx2600     
ora....ER.lsnr ora....er.type 0/5     0/     ONLINE    ONLINE    rx2600     
ora....N1.lsnr ora....er.type 0/5     0/0    ONLINE    ONLINE    rx2600     
ora.asm         ora.asm.type   0/5    0/     ONLINE    ONLINE    rx2600     
ora.eons        ora.eons.type 0/3    0/     ONLINE    ONLINE    rx2600     
ora.gsd         ora.gsd.type   0/5    0/     OFFLINE   OFFLINE              
ora....network ora....rk.type 0/5     0/     ONLINE    ONLINE    rx2600     
ora.oc4j        ora.oc4j.type 0/5    0/0    OFFLINE   OFFLINE              
ora.ons         ora.ons.type   0/3    0/     ONLINE    ONLINE    rx2600     
ora....SM1.asm application     0/5    0/0    ONLINE    ONLINE    rx2600     
ora....00.lsnr application     0/5    0/0    ONLINE    ONLINE    rx2600     
ora.rx2600.gsd application     0/5    0/0    OFFLINE   OFFLINE              
ora.rx2600.ons application     0/3    0/0    ONLINE    ONLINE    rx2600     
ora.rx2600.vip ora....t1.type 0/0     0/0    ONLINE    ONLINE    rx2600     
ora....SM2.asm application     0/5    0/0    ONLINE    ONLINE    rx3600     
ora....00.lsnr application     0/5    0/0    ONLINE    ONLINE    rx3600     
ora.rx3600.gsd application     0/5    0/0    OFFLINE   OFFLINE              
ora.rx3600.ons application     0/3    0/0    ONLINE    ONLINE    rx3600     
ora.rx3600.vip ora....t1.type 0/0     0/0    ONLINE    ONLINE    rx3600     
ora.scan1.vip ora....ip.type 0/0     0/0    ONLINE    ONLINE    rx2600
 
 
2. Verify that the clustered database is up:
rx2600-> su - grid -c "crsctl status resource -w \"TYPE co 'ora'\" -t"
 
 
--------------------------------------------------------------------------------
NAME            TARGET STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS1.dg
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
ora.DATA1.dg
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
ora.LISTENER.lsnr
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
ora.asm
               ONLINE ONLINE       rx2600                   Started            
               ONLINE ONLINE       rx3600                   Started            
ora.eons
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
ora.gsd
               OFFLINE OFFLINE      rx2600                                      
               OFFLINE OFFLINE      rx3600                                      
ora.net1.network
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
ora.ons
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE ONLINE       rx2600                                      
ora.dbrac.db
      1        ONLINE ONLINE       rx3600                   Open               
      2        ONLINE ONLINE       rx2600                   Open               
ora.oc4j
      1        OFFLINE OFFLINE                                                  
ora.rx2600.vip
      1        ONLINE ONLINE       rx2600                                      
ora.rx3600.vip
      1        ONLINE  ONLINE       rx3600                                      
ora.scan1.vip
      1        ONLINE ONLINE       rx2600                                      
 
3. Check the cluster status:
rx2600-> crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
4. Verify the database status:
rx2600-> srvctl status database -d dbrac
Instance dbrac1 is running on node rx2600
Instance dbrac2 is running on node rx3600
 
rx2600-> srvctl status instance -d dbrac -i dbrac1
Instance dbrac1 is running on node rx2600
rx2600-> srvctl status instance -d dbrac -i dbrac2
Instance dbrac2 is running on node rx3600
5. Verify the node application status:
rx2600-> srvctl status nodeapps
VIP rx2600-vip is enabled
VIP rx2600-vip is running on node: rx2600
VIP rx3600-vip is enabled
VIP rx3600-vip is running on node: rx3600
Network is enabled
Network is running on node: rx2600
Network is running on node: rx3600
GSD is enabled
GSD is not running on node: rx2600
GSD is not running on node: rx3600
ONS is enabled
ONS daemon is running on node: rx2600
ONS daemon is running on node: rx3600
eONS is enabled
eONS daemon is running on node: rx2600
eONS daemon is running on node: rx3600
 
6. Node applications (configuration):
rx2600-> srvctl config nodeapps
VIP exists.:rx2600
VIP exists.: /rx2600-vip/15.70.146.29/255.0.0.0/lan0
VIP exists.:rx3600
VIP exists.: /rx3600-vip/15.70.146.39/255.0.0.0/lan0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 15801, multicast IP address 234.7.2.206, listening port 2016
 
 
7. Database (configuration):
rx2600-> srvctl config database -d dbrac -a
Database unique name: dbrac
Database name: dbrac
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA1/dbrac/spfiledbrac.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: dbrac
Database instances: dbrac1,dbrac2
Disk Groups: DATA1
Services:
Database is enabled
Database is administrator managed
 
8. ASM (status and configuration):
rx2600-> srvctl status asm
ASM is running on rx2600,rx3600
 
rx2600-> srvctl config asm -a
ASM home: /u01/app/crs_home
ASM listener: LISTENER
ASM is enabled.
 
 
9. TNS listener (status and configuration):
rx2600-> srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rx2600,rx3600
 
rx2600-> srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
 /u01/app/crs_home on node(s) rx3600,rx2600
End points: TCP:1521
rx2600->
 
10. SCAN (status and configuration):
rx2600-> srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rx2600
 
rx2600-> srvctl config scan
SCAN name: rx-cluster-scan, Network: 1/15.0.0.0/255.0.0.0/lan0
SCAN VIP name: scan1, IP: /rx-cluster-scan/15.70.146.11