VMware 9.0 + Red Hat 6.5 (rhel-server-6.5-x86_64) + Oracle 11gR2 RAC
I haven't followed Oracle for a while, and the official release is already up to 12c. Since most users are still running 11g RAC, today I installed the latest Red Hat release, RHEL 6.5, in a virtual machine, cloned it, and installed 11g Grid Infrastructure and the database. There were plenty of adventures along the way; I collected a number of articles from other users and recorded the key steps and error messages here for future reference.
The main question is whether to set up ASM with udev or with the UEK kernel. On the OEL kernel, one view holds:
Oracle Enterprise Linux (OEL) is a Linux distribution first released by Oracle in early 2006, known for its strong support of Oracle software and hardware. Because Oracle backs it with the Unbreakable Linux (UBL) enterprise support program, many people call OEL "Unbreakable Linux". In September 2010, Oracle released a new kernel for it, the Unbreakable Enterprise Kernel (UEK), optimized specifically for Oracle software and hardware; most notably, Oracle claims database performance on OEL can improve by more than 75%.
Another view holds:
Oracle has changed many things in it, and customers may have their own requirements.
Key steps:
1. Create the shared disks for the VMs
In the VMware installation directory there is a vmware-vdiskmanager.exe:
vmware-vdiskmanager.exe -c -s 1024Mb -a lsilogic -t 2 "D:\vmware\redhat\sharedisk\ocr_vote.vmdk"
vmware-vdiskmanager.exe -c -s 1024Mb -a lsilogic -t 2 "D:\vmware\redhat\sharedisk\fra.vmdk"
vmware-vdiskmanager.exe -c -s 3072Mb -a lsilogic -t 2 "D:\vmware\redhat\sharedisk\data.vmdk"
2. Add the disks to each VM, with the buses set to scsi1:0, scsi2:0 and scsi3:0 respectively.
3. Open the .vmx file in each of the two VMs' directories and append at the end:
disk.locking="FALSE"
scsi1:0.SharedBus="Virtual"
scsi2:0.SharedBus="Virtual"
scsi3:0.SharedBus="Virtual"
3. Create the ASM disk groups
It turns out that the asmlib packages Oracle provides are all built for the 2.6.18 kernel, i.e. the RHEL 5 kernel; there is no build for the RHEL 6 kernel.
[root@localhost ~]# uname -rm
2.6.32-431.el6.x86_64 x86_64
Oracle only provides builds for kernel 2.6.18:
Oracle ASMLib 2.0
Intel IA32 (x86) Architecture
Library and Tools
oracleasm-support-2.1.8-1.el5.i386.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
Drivers for kernel 2.6.18-371.3.1.el5
oracleasm-2.6.18-371.3.1.el5xen-2.0.5-1.el5.i686.rpm
oracleasm-2.6.18-371.3.1.el5debug-2.0.5-1.el5.i686.rpm
oracleasm-2.6.18-371.3.1.el5PAE-2.0.5-1.el5.i686.rpm
oracleasm-2.6.18-371.3.1.el5-2.0.5-1.el5.i686.rpm
The newest oracleasm driver available only supports up to oracleasm-2.6.18-238.9.1.el5.
I eagerly installed RHEL 6.5 to play with 11g RAC, only to find there is no asmlib package for the RHEL 6 kernel 2.6.32, which was a letdown.
ocfs2 has no RHEL 6 package either.
Before Red Hat Enterprise Linux (RHEL) 6, Oracle always used the ASMLib kernel support library to configure ASM.
ASMLib is a kernel support library based on a Linux module, designed specifically for the Oracle Automatic Storage Management feature.
In May 2011, however, Oracle published a statement on ASMLib for Oracle Database announcing that it would no longer provide ASMLib or related updates for RHEL 6.
In the statement, Oracle said ASMLib updates would be distributed through the Unbreakable Linux Network (ULN) and made available only to Oracle Linux customers. ULN serves both Oracle and Red Hat customers, but any customer who wants to use ASMLib must replace the Red Hat kernel with Oracle's.
Software Update Policy for ASMLib running on future releases of Red Hat Enterprise Linux
Red Hat Enterprise Linux 6 (RHEL6)
For RHEL6 or Oracle Linux 6, Oracle will only provide ASMLib software and updates when configured with the Unbreakable Enterprise Kernel (UEK). Oracle will not provide ASMLib packages for kernels distributed by Red Hat as part of RHEL 6 or the Red Hat compatible kernel in Oracle Linux 6. ASMLib updates will be delivered via Unbreakable Linux Network (ULN), which is available to customers with Oracle Linux support. ULN works with both Oracle Linux and Red Hat Linux installations, but ASMLib usage will require replacing any Red Hat kernel with UEK.
So there is no ASMLib package for the Red Hat Enterprise Linux 6 series, and as the announcement above says, Oracle will not provide one for RHEL 6. Meanwhile, in an 11gR2 RAC installation the OCR and voting disks can no longer live on raw devices; they must go on ASM. How, then, do you install Oracle 11gR2 RAC on Red Hat Enterprise Linux 6?
Use udev to create the ASM disk groups
About udev
What is udev?
udev is a feature of the Linux 2.6 kernel. It replaced the older devfs and is now the default device-management tool on Linux. udev runs as a daemon and manages the device files under /dev by listening for the uevents the kernel emits. Unlike earlier device-management tools, udev runs in user space rather than kernel space.
Get the unique SCSI IDs of the three shared disks (/dev/sdb, /dev/sdc, /dev/sdd):
scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
scsi_id --whitelisted --replace-whitespace --device=/dev/sdd
Create the rules file /etc/udev/rules.d/99-oracle-asmdevices.rules:
for i in b c d;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmdba\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
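For reference, here is the shape of the rule line the loop above writes for one disk. This is only a sketch: the WWID below is a made-up placeholder, while on a real node the value must come from scsi_id.

```shell
# Hypothetical WWID; on a real node this is the output of
#   /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
wwid="36000c29example0000000000000000000"
# Build the rule line exactly as the loop does for /dev/sdb
rule=$(printf 'KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", NAME="asm-diskb", OWNER="grid", GROUP="asmdba", MODE="0660"' "$wwid")
echo "$rule"
```

udev matches any sd* device whose scsi_id output equals RESULT, names its device node asm-diskb, and gives it grid:asmdba ownership with 0660 permissions, which is what the Grid installer needs to discover the disk.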
Reload the rules to make them take effect:
/sbin/start_udev
With VMware you must add disk.EnableUUID = "TRUE" to the .vmx file, otherwise scsi_id returns no UUID.
I later found that at each boot the shared storage could only be attached by one machine at a time; it turned out that after /sbin/start_udev runs, the virtual disks disappear from the fdisk -l output.
When you reach the installer's Create ASM Disk Group page, click Change Discovery Path and the asm disks will show up.
11. Dropping udev and using the UEK kernel instead
2. Install the UEK kernel
UEK can be downloaded and installed from http://public-yum.oracle.com/:
[root@ora ~]# wget http://public-yum.oracle.com/rep ... 3.el6uek.x86_64.rpm
[root@ora ~]# wget http://public-yum.oracle.com/rep ... 3.el6uek.noarch.rpm
[root@ora ~]# wget http://public-yum.oracle.com/rep ... .5-1.el6.x86_64.rpm
[root@ora ~]# wget http://download.oracle.com/otn_s ... .4-1.el6.x86_64.rpm
[root@ora Downloads]# rpm -ivh kernel-uek-firmware-2.6.39-300.17.3.el6uek.noarch.rpm
[root@ora Downloads]# rpm -ivh kernel-uek-2.6.39-300.17.3.el6uek.x86_64.rpm
[root@ora Downloads]# rpm -ivh oracleasm-support-2.1.5-1.el6.x86_64.rpm
[root@ora Downloads]# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
kernel-uek-2.6.32-431
5. Create the ASM Disk Volumes
5.1 Configure and load the ASM kernel module
[root@ora ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@ora ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
5.2 Create the ASM disks
The disks must be partitioned first, and the machine must be rebooted after oracleasm configure -i.
[root@ora ~]# oracleasm createdisk CRSVOL1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@ora ~]# oracleasm createdisk DATAVOL1 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@ora ~]# oracleasm createdisk FRAVOL1 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@node1 ~]# oracleasm createdisk CRSVOL1 /dev/sdd1
Writing disk header: done
Instantiating disk: failed
Clearing disk header: done
The asmlib driver's kernel version must match the running UEK kernel, but I could only find packages for 2.6.39-300, while the RHEL 6.5 kernel is 2.6.32-431.
Firewall and SELinux must be disabled.
[root@ora ~]# oracleasm listdisks
CRSVOL1
DATAVOL1
DATAVOL2
FRAVOL1
dbca uses oracleasm-discover to find ASM disks, so first run oracleasm-discover and check that it can see the four disks just created:
[root@ora ~]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:CRSVOL1 [2096753 blocks (1073537536 bytes), maxio 512]
Discovered disk: ORCL:DATAVOL1 [41940960 blocks (21473771520 bytes), maxio 512]
Discovered disk: ORCL:DATAVOL2 [41940960 blocks (21473771520 bytes), maxio 512]
Discovered disk: ORCL:FRAVOL1 [62912480 blocks (32211189760 bytes), maxio 512]
Use dmesg and strace, both provided by Linux, to pin the problem down:
[root@dga01 ~]# dmesg
sd 2:0:1:0: [sdb] Cache data unavailable
sd 2:0:1:0: [sdb] Assuming drive cache: write through
sd 2:0:1:0: [sdb] Attached SCSI disk
sd 3:0:0:0: [sde] Cache data unavailable
sd 3:0:0:0: [sde] Assuming drive cache: write through
sd 3:0:0:0: [sde] Attached SCSI disk
sd 2:0:3:0: [sdd] Cache data unavailable
sd 2:0:3:0: [sdd] Assuming drive cache: write through
sd 2:0:3:0: [sdd] Attached SCSI disk
EXT4-fs (sda3): mounted filesystem with ordered data mode. Opts: (null)
dracut: Mounted root filesystem /dev/sda3
dracut: Loading SELinux policy
type=1404 audit(1363446394.257:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295
SELinux: 2048 avtab hash slots, 250819 rules.
SELinux: 2048 avtab hash slots, 250819 rules.
SELinux: 9 users, 12 roles, 3762 types, 187 bools, 1 sens, 1024 cats
SELinux: 81 classes, 250819 rules
SELinux: Permission audit_access in class file not defined in policy.
SELinux: Permission audit_access in class dir not defined in policy.
SELinux: Permission execmod in class dir not defined in policy.
SELinux: Permission audit_access in class lnk_file not defined in policy.
SELinux: Permission open in class lnk_file not defined in policy.
SELinux: Permission execmod in class lnk_file not defined in policy.
SELinux: Permission audit_access in class chr_file not defined in policy.
SELinux: Permission audit_access in class blk_file not defined in policy.
SELinux: Permission execmod in class blk_file not defined in policy.
SELinux: Permission audit_access in class sock_file not defined in policy.
SELinux: Permission execmod in class sock_file not defined in policy.
SELinux: Permission audit_access in class fifo_file not defined in policy.
SELinux: Permission execmod in class fifo_file not defined in policy.
SELinux: Permission syslog in class capability2 not defined in policy.
SELinux: the above unknown classes and permissions will be allowed
[root@dga01 ~]# strace -f -o asm.out /usr/sbin/oracleasm createdisk OCR /dev/sde1
3714 brk(0) = 0x1677000
3714 brk(0x1698000) = 0x1698000
3714 stat("/dev/sde1", {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 65), ...}) = 0
3714 open("/dev/sde1", O_RDWR) = 4
3714 fstat(4, {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 65), ...}) = 0
3714 fstat(4, {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 65), ...}) = 0
3714 mknod("/dev/oracleasm/disks/OCR", S_IFBLK|0600, makedev(8, 65)) = -1 EACCES (Permission denied)
3714 write(2, "oracleasm-instantiate-disk: ", 28) = 28
3714 write(2, "Unable to create ASM disk \"OCR\":"..., 51) = 51
3714 close(4)
The logs mention SELinux repeatedly, and the strace shows mknod("/dev/oracleasm/disks/OCR", S_IFBLK|0600, makedev(8, 65)) = -1 EACCES (Permission denied).
The problem is probably SELinux or the firewall, so check the status of both:
[root@dga01 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@dga01 ~]# getenforce
Enforcing
Both iptables and SELinux are enabled; try stopping these two services.
Stop the Linux firewall:
[root@dga01 ~]# iptables -F
[root@dga01 ~]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
[root@dga01 log]# chkconfig iptables off
Stop the SELinux service:
[root@dga01 ~]# setenforce 0
Edit the SELinux config file and set SELINUX=disabled:
[root@dga01 ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Check the Linux firewall and SELinux again:
[root@dga01 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@dga01 log]# getenforce
Permissive
Re-create the ASM disk; this time it completes without trouble:
[root@dga01 ~]# oracleasm createdisk OCR /dev/sde1
Writing disk header: done
Instantiating disk: done
4. Set up the hosts file
# Public Network eth0
192.168.189.128 node1.rac.com node1
192.168.189.129 node2.rac.com node2
# Virtual IP
192.168.189.126 node1-vip.rac.com node1-vip
192.168.189.127 node2-vip.rac.com node2-vip
# Private Network eth1
192.168.189.130 node1-priv.rac.com node1-priv
192.168.189.131 node2-priv.rac.com node2-priv
#SCAN IP
192.168.189.125 scan.rac.com scan
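A quick sanity check that is worth running on a file like this: every IP and every short name should appear exactly once. A sketch, written against a temp copy of the entries above (on the real nodes they live in /etc/hosts):

```shell
# Write the RAC entries to a temp file standing in for /etc/hosts
f=$(mktemp)
cat > "$f" <<'EOF'
192.168.189.128 node1.rac.com node1
192.168.189.129 node2.rac.com node2
192.168.189.126 node1-vip.rac.com node1-vip
192.168.189.127 node2-vip.rac.com node2-vip
192.168.189.130 node1-priv.rac.com node1-priv
192.168.189.131 node2-priv.rac.com node2-priv
192.168.189.125 scan.rac.com scan
EOF
# Duplicate IPs or duplicate short names print here; silence means OK
awk '{print $1}' "$f" | sort | uniq -d
awk '{print $3}' "$f" | sort | uniq -d
echo "checked $(wc -l < "$f") host entries"
```

The same file must be identical on both nodes, or the installer's node-reachability checks will fail in confusing ways.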
5. Check the required packages
binutils-2.17.50.0.6 compat-libstdc++-33-3.2.3 compat-libstdc++-33-3.2.3 (32 bit) elfutils-libelf-0.125 elfutils-libelf-devel-0.125 gcc-4.1.2 gcc-c++-4.1.2 glibc-2.5-24 glibc-2.5-24 (32 bit) glibc-common-2.5 glibc-devel-2.5 glibc-devel-2.5 (32 bit) glibc-headers-2.5 ksh-20060214 libaio-0.3.106 libaio-0.3.106 (32 bit) libaio-devel-0.3.106 libaio-devel-0.3.106 (32 bit) libgcc-4.1.2 libgcc-4.1.2 (32 bit)
libstdc++-4.1.2 libstdc++-4.1.2 (32 bit) libstdc++-devel 4.1.2 make-3.81 numactl-devel-0.9.8.x86_64 sysstat-7.0.2 unixODBC-2.2.11 unixODBC-2.2.11 (32 bit) unixODBC-devel-2.2.11 unixODBC-devel-2.2.11 (32 bit)
2.2.2 Install the x86_64 packages in one shot from the yum repo
# yum -y install binutils* compat* elfutils* gcc* glibc* libaio* libgcc* libstdc* numactl* sysstat* unixODBC* make* ksh*
2.2.3 Upload and install the 32-bit packages
rpm -ivh unixODBC* compat* glibc* lib*
During installation you can use the RHEL 6.5 DVD as a local yum repository.
This works for both the 32-bit and 64-bit RHEL 6.5 systems.
I am using a VMware virtual machine; with the DVD set to connected, the system auto-mounts it at "/media/RHEL_6.5 x86_64 Disc 1" after login.
First unmount it:
umount /media/RHEL_6.5\ x86_64\ Disc\ 1/
Create a mount point:
mkdir /mnt/cdrom
Then mount the DVD at /mnt/cdrom:
mount /dev/cdrom /mnt/cdrom
If you are using an iso file instead, upload it to the server first, e.g. to /data/src/rhel/6/rhel-server-6.5-x86_64-dvd.iso, and loop-mount it:
mount -o loop /data/src/rhel/6/rhel-server-6.5-x86_64-dvd.iso /mnt/cdrom
Generate the yum repo file:
cat > /etc/yum.repos.d/rhel6.repo <<EOF
[rhel6]
name=rhel6
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
EOF
sed -i "s#remote = url + '/' + relative#remote = '/mnt/cdrom' + '/' + relative#g" /usr/lib/python2.6/site-packages/yum/yumRepo.py
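The sed above patches yum's yumRepo.py so that relative package paths resolve under /mnt/cdrom rather than a remote URL, which is what makes the unregistered system install straight from the mounted DVD. A sketch of the same substitution on a throwaway copy, so its effect is visible (the real target path is the one in the command above):

```shell
# Throwaway stand-in for /usr/lib/python2.6/site-packages/yum/yumRepo.py
f=$(mktemp)
echo "remote = url + '/' + relative" > "$f"
sed -i "s#remote = url + '/' + relative#remote = '/mnt/cdrom' + '/' + relative#g" "$f"
cat "$f"
```

Using # as the sed delimiter avoids having to escape the slashes inside the replacement path.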
Import the RPM signing key:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Clean the cache:
yum clean all
If you get the following error:
[root@localhost ~]# yum clean all
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Existing lock /var/run/yum.pid: another copy is running as pid 2267.
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: PackageKit
Memory : 48 M RSS (365 MB VSZ)
Started: Sat Nov 23 01:28:11 2013 - 10:00 ago
State : Sleeping, pid: 2267
kill the yum process first:
kill -9 2267
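Rather than copying the pid out of the message by hand, you can read it from the lock file the message names. A sketch, with a temp file standing in for /var/run/yum.pid and the kill only echoed:

```shell
# Stand-in for /var/run/yum.pid, which holds the pid of the running yum
lock=$(mktemp)
echo 2267 > "$lock"
pid=$(cat "$lock")
# On the real system this would be: kill -9 "$pid"
echo "would kill yum pid $pid"
```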
and then run again:
yum clean all
That completes the local repo setup.
6. Configure passwordless SSH login
ssh-keygen -t rsa
ssh-copy-id -i /home/grid/.ssh/id_rsa.pub grid@192.168.189.128
ssh-copy-id -i /home/grid/.ssh/id_rsa.pub grid@192.168.189.129
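ssh-keygen prompts for a passphrase by default; for user equivalence you want an empty one. A non-interactive sketch (the keypair lands in a temp dir here; on the nodes it belongs in /home/grid/.ssh):

```shell
# Generate a passphrase-less RSA keypair without any prompts
d=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$d/id_rsa" -q
ls "$d"
```

Note that the RAC installer needs equivalence in both directions, so run ssh-copy-id from each node to every node, and ssh to each host once beforehand so the host keys are accepted and the first connection is prompt-free.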
7. su - root
xhost +
Otherwise you will get an error like this:
08-31PM. Please wait ...[grid@node1 grid]$ No protocol specified
Exception in thread "main" java.lang.NoClassDefFoundError
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:164)
at java.awt.Toolkit$2.run(Toolkit.java:821)
at java.security.AccessController.doPrivileged(Native Method)
at java.awt.Toolkit.getDefaultToolkit(Toolkit.java:804)
at com.jgoodies.looks.LookUtils.isLowResolution(Unknown Source)
at com.jgoodies.looks.LookUtils.(Unknown Source)
at com.jgoodies.looks.plastic.PlasticLookAndFeel.(PlasticLookAndFeel.java:122)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:242)
at javax.swing.SwingUtilities.loadSystemClass(SwingUtilities.java:1783)
at javax.swing.UIManager.setLookAndFeel(UIManager.java:480)
at oracle.install.commons.util.Application.startup(Application.java:758)
at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:164)
at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:181)
at oracle.install.commons.base.driver.common.Installer.startup(Installer.java:265)
at oracle.install.ivw.crs.driver.CRSInstaller.startup(CRSInstaller.java:96)
at oracle.install.ivw.crs.driver.CRSInstaller.main(CRSInstaller.java:103)
8. su - grid
Install Grid Infrastructure.
9. During installation the installer checks that the Ethernet interface names are identical on both machines.
The eth interfaces must match on the two nodes, e.g. rename eth2 to eth1.
If they differ, edit /etc/udev/rules.d/70-persistent-net.rules.
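As a sketch of that edit: the rules file maps MAC addresses to interface names, so renaming eth2 to eth1 is a single substitution. The sample below uses a placeholder MAC and a temp path; the real file is /etc/udev/rules.d/70-persistent-net.rules, and the node needs a reboot (or udev restart plus interface re-plug) afterwards.

```shell
# Sample rules file; the MAC address is a made-up placeholder
f=/tmp/70-persistent-net.rules
cat > "$f" <<'EOF'
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:aa:bb:cc", NAME="eth2"
EOF
# Rename eth2 to eth1 so both nodes expose the same interface names
sed -i 's/NAME="eth2"/NAME="eth1"/' "$f"
grep 'NAME=' "$f"
```

On a real node you would also rename the matching /etc/sysconfig/network-scripts/ifcfg-eth2 file and update its DEVICE= line to keep the network scripts consistent.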
10. Pre-installation check script
[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -fixup -n node1,node2 -verbose
Check the cluster state:
crs_stat -t
asmca
dbca
netca
select instance_name,status from v$instance;
18. Database administration
- Starting and stopping RAC
Oracle RAC starts automatically at boot; when you need to do maintenance, use the following commands:
- Stop:
crsctl stop cluster        stop the cluster services on this node
crsctl stop cluster -all   stop the cluster services on all nodes
- Start:
crsctl start cluster       start the cluster services on this node
crsctl start cluster -all  start the cluster services on all nodes
Note: run the commands above as root.
- Check cluster health
Run as the grid user:
[grid@rac1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
- Check database instance status
[oracle@rac1 ~]$ srvctl status database -d orcl
Instance rac1 is running on node rac1
Instance rac2 is running on node rac2
- Check node application status and configuration
[oracle@rac1 ~]$ srvctl status nodeapps
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
Network is enabled
Network is running on node: rac1
Network is running on node: rac2
GSD is disabled
GSD is not running on node: rac1
GSD is not running on node: rac2
ONS is enabled
ONS daemon is running on node: rac1
ONS daemon is running on node: rac2
eONS is enabled
eONS daemon is running on node: rac1
eONS daemon is running on node: rac2
[oracle@rac1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:rac1
VIP exists.: /rac1-vip/10.160.1.106/255.255.255.0/eth0
VIP exists.:rac2
VIP exists.: /rac2-vip/10.160.1.107/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home:
/oracle/11.2.0/grid on node(s) rac2,rac1
End points: TCP:1521
- Check the database configuration
[oracle@rac1 ~]$ srvctl config database -d orcl -a
Database unique name: orcl
Database name: orcl.lottemart.cn
Oracle home: /oracle/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +ORCL_DATA/orcl/spfileorcl.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: rac1,rac2
Disk Groups: DATA,FLASH
Services:
Database is enabled
Database is administrator managed
- Check ASM status and configuration
[oracle@rac1 ~]$ srvctl status asm
ASM is running on rac1,rac2
[oracle@rac1 ~]$ srvctl config asm -a
ASM home: /oracle/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
- Check TNS listener status and configuration
[oracle@rac1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac1,rac2
[oracle@rac1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: /oracle/11.2.0/grid on node(s) rac2,rac1
End points: TCP:1521
- Check SCAN status and configuration
[oracle@rac1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac1
[oracle@rac1 ~]$ srvctl config scan
SCAN name: rac-cluster-scan.rac.localdomain, Network:
1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP:
/rac-cluster-scan.rac.localdomain
- Check VIP status and configuration
[oracle@rac1 ~]$ srvctl status vip -n rac1
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
[oracle@rac1 ~]$ srvctl status vip -n rac2
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
[oracle@rac1 ~]$ srvctl config vip -n rac1
VIP exists.:rac1
VIP exists.: /rac1-vip/192.168.11.14/255.255.255.0/eth0
[oracle@rac1 ~]$ srvctl config vip -n rac2
VIP exists.:rac2
VIP exists.: /rac2-vip/192.168.15/255.255.255.0/eth0
7.1 Verifying All Cluster Database Information
[grid@11grac1 grid]$ crsctl status resource
NAME=ora.11grac1.vip TYPE=ora.cluster_vip_net1.type TARGET=ONLINE STATE=ONLINE on 11grac1
NAME=ora.11grac2.vip TYPE=ora.cluster_vip_net1.type TARGET=ONLINE STATE=ONLINE on 11grac2
NAME=ora.CRS.dg TYPE=ora.diskgroup.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.DATA.dg TYPE=ora.diskgroup.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.FRA.dg TYPE=ora.diskgroup.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.LISTENER.lsnr TYPE=ora.listener.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.LISTENER_SCAN1.lsnr TYPE=ora.scan_listener.type TARGET=ONLINE STATE=ONLINE on 11grac2
NAME=ora.LISTENER_SCAN2.lsnr TYPE=ora.scan_listener.type TARGET=ONLINE STATE=ONLINE on 11grac1
NAME=ora.asm TYPE=ora.asm.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.eons TYPE=ora.eons.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.gsd TYPE=ora.gsd.type TARGET=ONLINE , ONLINE
STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.net1.network TYPE=ora.network.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.oc4j TYPE=ora.oc4j.type TARGET=ONLINE STATE=ONLINE on 11grac1
NAME=ora.ons TYPE=ora.ons.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.racdb.db TYPE=ora.database.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.registry.acfs TYPE=ora.registry.acfs.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.scan1.vip TYPE=ora.scan_vip.type TARGET=ONLINE STATE=ONLINE on 11grac2
NAME=ora.scan2.vip TYPE=ora.scan_vip.type TARGET=ONLINE STATE=ONLINE on 11grac1
[grid@11grac1 ~]$ cluvfy comp scan -verbose
Verifying scan
Checking Single Client Access Name (SCAN)...
SCAN VIP name Node Running? ListenerName Port Running?
---------------- ------------ ------------ ------------ ------------ ------------
scanvip 11grac2 true LISTENER 1521 true
Checking name resolution setup for "scanvip"...
SCAN Name IP Address Status Comment
------------ ------------------------ ------------------------ ----------
scanvip      192.168.60.15            passed
scanvip      192.168.60.16            passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful.
7.2 Verifying Clock Synchronization across the Cluster Nodes
[grid@11grac1 grid]$ cluvfy comp clocksync -verbose
Verifying Clock Synchronization across the cluster nodes
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name                             Status
------------------------------------  ------------------------
11grac1                               passed
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
Node Name                             State
------------------------------------  ------------------------
11grac1                               Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
Node Name    Time Offset              Status
------------ ------------------------ ------------------------
11grac1      0.0                      passed
Time offset is within the specified limits on the following set of nodes: "[11grac1]"
Result: Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful
7.3 Check the Health of the Cluster
[grid@11grac1 grid]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
7.4 Check All Database Status
[oracle@11grac1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node 11grac1
Instance racdb2 is running on node 11grac2
[oracle@11grac1 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node 11grac1
[oracle@11grac1 ~]$ srvctl status instance -d racdb -i racdb2
Instance racdb2 is running on node 11grac2
7.5 Check Node Application Status/Configuration
[oracle@11grac1 ~]$ srvctl status nodeapps
VIP oravip1 is enabled
VIP oravip1 is running on node: 11grac1
VIP oravip2 is enabled
VIP oravip2 is running on node: 11grac2
Network is enabled
Network is running on node: 11grac1
Network is running on node: 11grac2
GSD is enabled
GSD is running on node: 11grac1
GSD is running on node: 11grac2
ONS is enabled
ONS daemon is running on node: 11grac1
ONS daemon is running on node: 11grac2
eONS is enabled
eONS daemon is running on node: 11grac1
eONS daemon is running on node: 11grac2
[oracle@11grac1 ~]$ srvctl config nodeapps
VIP exists.:11grac1
VIP exists.: /oravip1/192.168.60.13/255.255.255.0/eth0
VIP exists.:11grac2
VIP exists.: /oravip2/192.168.60.14/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 17385, multicast IP address 234.137.253.253, listening port 2016
7.6 List All Configured Database
[oracle@11grac1 ~]$ srvctl config database
racdb
[oracle@11grac1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /11grac/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: DATA,FRA
Services:
Database is enabled
Database is administrator managed
7.7 Check ASM Status/Configuration
[oracle@11grac1 ~]$ srvctl status asm
ASM is running on 11grac1,11grac2
[oracle@11grac1 ~]$ srvctl config asm -a
ASM home: /11grac/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
7.8 Check TNS Listener Status/Configuration
[oracle@11grac1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): 11grac1,11grac2
[oracle@11grac1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: /11grac/app/11.2.0/grid on node(s) 11grac2,11grac1
End points: TCP:1521
7.9 Check SCAN Status/Configuration
[oracle@11grac1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node 11grac2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node 11grac1
[oracle@11grac1 ~]$ srvctl config scan
SCAN name: scanvip, Network: 1/192.168.60.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scanvip.rac.com/192.168.60.15
SCAN VIP name: scan2, IP: /scanvip.rac.com/192.168.60.16
7.10 Check VIP Status/Configuration
[oracle@11grac1 ~]$ srvctl status vip -n 11grac1
VIP oravip1 is enabled
VIP oravip1 is running on node: 11grac1
[oracle@11grac1 ~]$ srvctl status vip -n 11grac2
VIP oravip2 is enabled
VIP oravip2 is running on node: 11grac2
[oracle@11grac1 ~]$ srvctl config vip -n 11grac1
VIP exists.:11grac1
VIP exists.: /oravip1/192.168.60.13/255.255.255.0/eth0
[oracle@11grac1 ~]$ srvctl config vip -n 11grac2
VIP exists.:11grac2
VIP exists.: /oravip2/192.168.60.14/255.255.255.0/eth0
7.11 Configuration for Node Applications (VIP, GSD, ONS, Listener)
[oracle@11grac1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:11grac1
VIP exists.: /oravip1/192.168.60.13/255.255.255.0/eth0
VIP exists.:11grac2
VIP exists.: /oravip2/192.168.60.14/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: /11grac/app/11.2.0/grid on node(s) 11grac2,11grac1
End points: TCP:1521
7.12 Check All Services
[oracle@11grac1 ~]$ su - grid -c "crs_stat -t -v"
Password:
Name           Type           R/RA   F/FT   Target  State   Host
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    ONLINE  ONLINE  11grac1
ora....C1.lsnr application    0/5    0/0    ONLINE  ONLINE  11grac1
ora....ac1.gsd application    0/5    0/0    ONLINE  ONLINE  11grac1
ora....ac1.ons application    0/3    0/0    ONLINE  ONLINE  11grac1
ora....ac1.vip ora....t1.type 0/0    0/0    ONLINE  ONLINE  11grac1
ora....SM2.asm application    0/5    0/0    ONLINE  ONLINE  11grac2
ora....C2.lsnr application    0/5    0/0    ONLINE  ONLINE  11grac2
ora....ac2.gsd application    0/5    0/0    ONLINE  ONLINE  11grac2
ora....ac2.ons application    0/3    0/0    ONLINE  ONLINE  11grac2
ora....ac2.vip ora....t1.type 0/0    0/0    ONLINE  ONLINE  11grac2
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE  ONLINE  11grac1
ora.DATA.dg    ora....up.type 0/5    0/     ONLINE  ONLINE  11grac1
ora.FRA.dg     ora....up.type 0/5    0/     ONLINE  ONLINE  11grac1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE  ONLINE  11grac1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE  ONLINE  11grac2
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE  ONLINE  11grac1
ora.asm        ora.asm.type   0/5    0/     ONLINE  ONLINE  11grac1
ora.eons       ora.eons.type  0/3    0/     ONLINE  ONLINE  11grac1
ora.gsd        ora.gsd.type   0/5    0/     ONLINE  ONLINE  11grac1
ora....network ora....rk.type 0/5    0/     ONLINE  ONLINE  11grac1
ora.oc4j       ora.oc4j.type  0/5    0/0    ONLINE  ONLINE  11grac1
ora.ons        ora.ons.type   0/3    0/     ONLINE  ONLINE  11grac1
ora.racdb.db   ora....se.type 0/2    0/1    ONLINE  ONLINE  11grac1
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE  ONLINE  11grac1
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE  ONLINE  11grac2
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE  ONLINE  11grac1
7.13 Starting the Oracle Clusterware Stack
[root@11grac1 ~]# /11grac/app/11.2.0/grid/bin/crsctl stop cluster (-all)
[root@11grac1 ~]# /11grac/app/11.2.0/grid/bin/crsctl start cluster (-all)
[root@11grac1 ~]# /11grac/app/11.2.0/grid/bin/crsctl start cluster -n 11grac1 11grac2