Overview:
It has been over a year since I released my previous series, the Oracle 11gR2 RAC+DG hands-on tutorial. In that time the series has been warmly received by many readers and has remained popular online, and many of them have asked me to produce a matching series on Oracle 10gR2 RAC. As it happens, a customer in Shenzhen recently needed a two-node Oracle 10gR2 RAC production database upgraded to Oracle 11gR2 RAC, and that project became this hands-on video series.
To mirror the real production process as closely as possible, the series is split into two parts:
Part 1: build a two-node 10gR2 RAC database from scratch on OEL 5.5 x86_64 and patch it to 10.2.0.5.0.
Part 2: upgrade that 10gR2 RAC step by step to Oracle 11gR2 RAC, using the latest 11g release, 11.2.0.4.0.
This article is Part 2.
Main steps:
1. Configure the SCAN IP and DNS, and stop the ntpd service
2. Add the oracle user to the oper group
3. Create the base and home directories for the grid software
4. Create the home directory for the 11gR2 Oracle software
5. Create the ASM disk for 11gR2 Grid Infrastructure
6. Stop the 10gR2 RAC software
7. Back up the 10gR2 RAC software
8. Install the grid software
9. Migrate the 10g RAC disk groups to 11gR2 grid
10. Install the 11gR2 Oracle software
11. Upgrade the 10gR2 RAC database to an 11gR2 RAC database
1 Configure the SCAN IP and DNS, and stop the ntpd service
DNS server configuration is not repeated here; you can either:
1. watch lecture 2 of my earlier video series 《黄伟老师Oracle RAC+DG系列视频+售后安心技术支持服务》 (preparing the first RAC node: configuring the network and the DNS server), or
2. read section 2.3 (configuring the DNS server) of part 2 of my article series, Step-by-Step Oracle 11gR2 RAC Installation on Linux, and confirm that the SCAN IP can be resolved.
Because our two nodes use the oracleonlinux.cn domain, the specific changes needed are:
A. Add the following to /var/named/chroot/etc/named.rfc1912.zones:
zone "oracleonlinux.cn" IN {
type master;
file "oracleonlinux.cn.zone";
allow-update { none; };
};
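The reverse zone file used below, 0.16.172.in-addr.arpa, also needs a matching declaration in named.rfc1912.zones. Assuming it was not already declared during the earlier DNS setup, the entry would look like:

```
zone "0.16.172.in-addr.arpa" IN {
type master;
file "0.16.172.in-addr.arpa";
allow-update { none; };
};
```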
B. Add the forward zone file:
[root@rdd named]# pwd
/var/named/chroot/var/named
[root@rdd named]# cat oracleonlinux.cn.zone
$TTL 86400
@ IN SOA oracleonlinux.cn. root.oracleonlinux.cn. (
42 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
IN NS oracleonlinux.cn.
localhost IN A 127.0.0.1
scan-cluster IN A 172.16.0.223
[root@rdd named]#
C. Modify the reverse zone file:
[root@rdd named]# pwd
/var/named/chroot/var/named
[root@rdd named]# ll
total 44
-rw-r----- 1 root named 491 Dec 19 11:39 0.16.172.in-addr.arpa
drwxrwx--- 2 named named 4096 Dec 19 10:45 data
-rw-r----- 1 root named 244 Nov 23 14:19 localdomain.zone
-rw-r----- 1 root named 195 Jan 21 2010 localhost.zone
-rw-r----- 1 root named 427 Jan 21 2010 named.broadcast
-rw-r----- 1 root named 1892 Jan 21 2010 named.ca
-rw-r----- 1 root named 424 Jan 21 2010 named.ip6.local
-rw-r----- 1 root named 426 Jan 21 2010 named.local
-rw-r----- 1 root named 427 Jan 21 2010 named.zero
-rw-r----- 1 root named 275 Dec 19 11:39 oracleonlinux.cn.zone
drwxrwx--- 2 named named 4096 Jul 27 2004 slaves
[root@rdd named]# cat 0.16.172.in-addr.arpa
$TTL 86400
@ IN SOA oracleonlinux.cn. root.oracleonlinux.cn. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
IN NS localhost.
1 IN PTR localhost.
223 IN PTR scan-cluster.oracleonlinux.cn.
[root@rdd named]#
D. Verify on both nodes that the SCAN IP can be resolved, for example with `nslookup scan-cluster.oracleonlinux.cn` (the original screenshots of the Node1 and Node2 output are omitted here).
Stop the NTP service on both nodes. Moving /etc/ntp.conf aside deconfigures NTP, so the 11gR2 Cluster Time Synchronization Service (CTSS) can later run in active mode; the `[FAILED]` from `service ntpd stop` below simply means ntpd was not running.
Node1:
[root@node1 ~]# ll /etc/ntp.conf
-rw-r--r-- 1 root root 1833 Dec 9 2009 /etc/ntp.conf
[root@node1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak
[root@node1 ~]# service ntpd stop
Shutting down ntpd: [FAILED]
[root@node1 ~]# service ntpd status
ntpd is stopped
[root@node1 ~]#
Node2:
[root@node2 ~]# ll /etc/ntp.conf
-rw-r--r-- 1 root root 1833 Dec 9 2009 /etc/ntp.conf
[root@node2 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak
[root@node2 ~]# service ntpd status
ntpd is stopped
[root@node2 ~]# service ntpd stop
Shutting down ntpd: [FAILED]
[root@node2 ~]#
2 Add the oracle user to the oper group
Create the oper group on both nodes and add the oracle user to it.
Node1:
[root@node1 ~]# id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba)
[root@node1 ~]# groupadd oper
[root@node1 ~]# usermod -a -G oper oracle
[root@node1 ~]# id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba),502(oper)
[root@node1 ~]#
Node2:
[root@node2 ~]# id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba)
[root@node2 ~]# groupadd oper
[root@node2 ~]# usermod -a -G oper oracle
[root@node2 ~]# id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba),502(oper)
[root@node2 ~]#
3 Create the base and home directories for the grid software
Note:
Here the grid software's ORACLE_BASE is /u01/app/grid and its ORACLE_HOME is /u01/app/11.2.0/grid.
Note that these two directories are parallel: unlike the traditional database layout, the grid ORACLE_HOME must not be a subdirectory of its ORACLE_BASE!
Node1:
[root@node1 ~]# hostname
node1.oracleonlinux.cn
[root@node1 ~]# mkdir -p /u01/app/grid
[root@node1 ~]# mkdir -p /u01/app/11.2.0/grid
[root@node1 ~]# chown -R oracle:oinstall /u01
[root@node1 ~]# ll /u01/app/
total 12
drwxrwxr-x 3 oracle oinstall 4096 Dec 25 20:19 11.2.0
drwxr-xr-x 2 oracle oinstall 4096 Dec 25 20:18 grid
drwxrwxr-x 5 oracle oinstall 4096 Dec 25 17:25 oracle
[root@node1 ~]#
Node2:
[root@node2 ~]# hostname
node2.oracleonlinux.cn
[root@node2 ~]# mkdir -p /u01/app/grid
[root@node2 ~]# mkdir -p /u01/app/11.2.0/grid
[root@node2 ~]# chown -R oracle:oinstall /u01
[root@node2 ~]# ll /u01/app/
total 24
drwxrwxr-x 3 oracle oinstall 4096 Dec 25 20:21 11.2.0
drwxr-xr-x 2 oracle oinstall 4096 Dec 25 20:21 grid
drwxrwxr-x 5 oracle oinstall 4096 Dec 22 10:17 oracle
[root@node2 ~]#
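The layout above can also be scripted. A minimal sketch follows; `$ROOT` is a stand-in prefix so the sketch can be dry-run anywhere (in the article the directories live directly under /u01):

```shell
# Create the parallel grid ORACLE_BASE and ORACLE_HOME directories.
# ROOT stands in for / (the article creates these under /u01);
# a temp prefix makes the sketch safe to dry-run without root.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/u01/app/grid"          # grid ORACLE_BASE
mkdir -p "$ROOT/u01/app/11.2.0/grid"   # grid ORACLE_HOME -- parallel, not nested
# chown -R oracle:oinstall "$ROOT/u01" # ownership change requires root
ls -d "$ROOT"/u01/app/*
```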
4 Create the home directory for the 11gR2 Oracle software
Note:
Here ORACLE_HOME is set to /u01/app/oracle/product/11.2.0/db_1; ORACLE_BASE keeps its existing 10g setting.
Node1:
node1-> mkdir -p /u01/app/oracle/product/11.2.0/db_1
node1-> ll /u01/app/oracle/product/
total 8
drwxrwxr-x 4 oracle oinstall 4096 Dec 21 23:23 10.2.0
drwxr-xr-x 3 oracle oinstall 4096 Dec 26 09:32 11.2.0
node1->
Node2:
node2-> pwd
/home/oracle
node2-> mkdir -p /u01/app/oracle/product/11.2.0/db_1
node2-> ll /u01/app/oracle/product/
total 16
drwxrwxr-x 4 oracle oinstall 4096 Dec 21 23:32 10.2.0
drwxr-xr-x 3 oracle oinstall 4096 Dec 26 09:34 11.2.0
node2->
5 Create the ASM disk for 11gR2 Grid Infrastructure
On node 1, turn the previously prepared /dev/sdf1 partition into a third ASM disk, which will hold the OCR and voting disk when the grid software is installed.
Node1:
[root@node1 ~]# /etc/init.d/oracleasm listdisks
ASMDISK1
ASMDISK2
[root@node1 ~]# /etc/init.d/oracleasm querydisk /dev/sd*
Device "/dev/sda" is not marked as an ASM disk
Device "/dev/sda1" is not marked as an ASM disk
Device "/dev/sda2" is not marked as an ASM disk
Device "/dev/sdb" is not marked as an ASM disk
Device "/dev/sdb1" is not marked as an ASM disk
Device "/dev/sdc" is not marked as an ASM disk
Device "/dev/sdc1" is not marked as an ASM disk
Device "/dev/sdd" is not marked as an ASM disk
Device "/dev/sdd1" is marked an ASM disk with the label "ASMDISK1"
Device "/dev/sde" is not marked as an ASM disk
Device "/dev/sde1" is marked an ASM disk with the label "ASMDISK2"
Device "/dev/sdf" is not marked as an ASM disk
Device "/dev/sdf1" is not marked as an ASM disk
[root@node1 ~]# /etc/init.d/oracleasm createdisk asmdisk3 /dev/sdf1
Marking disk "asmdisk3" as an ASM disk: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm listdisks
ASMDISK1
ASMDISK2
ASMDISK3
[root@node1 ~]#
Node2:
[root@node2 ~]# /etc/init.d/oracleasm listdisks
ASMDISK1
ASMDISK2
[root@node2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node2 ~]# /etc/init.d/oracleasm listdisks
ASMDISK1
ASMDISK2
ASMDISK3
[root@node2 ~]#
6 Stop the 10gR2 RAC software
A. Stop the database
[root@node1 ~]# su - oracle
node1-> crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.devdb.db application ONLINE ONLINE node1
ora....b1.inst application ONLINE ONLINE node1
ora....b2.inst application ONLINE ONLINE node2
ora....c10g.cs application ONLINE ONLINE node1
ora....db1.srv application ONLINE ONLINE node1
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
node1-> srvctl stop service -d devdb
node1-> crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.devdb.db application ONLINE ONLINE node1
ora....b1.inst application ONLINE ONLINE node1
ora....b2.inst application ONLINE ONLINE node2
ora....c10g.cs application OFFLINE OFFLINE
ora....db1.srv application OFFLINE OFFLINE
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
node1-> srvctl stop database -d devdb
node1-> crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.devdb.db application OFFLINE OFFLINE
ora....b1.inst application OFFLINE OFFLINE
ora....b2.inst application OFFLINE OFFLINE
ora....c10g.cs application OFFLINE OFFLINE
ora....db1.srv application OFFLINE OFFLINE
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
node1-> srvctl stop asm -n node1
node1-> srvctl stop asm -n node2
node1-> crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.devdb.db application OFFLINE OFFLINE
ora....b1.inst application OFFLINE OFFLINE
ora....b2.inst application OFFLINE OFFLINE
ora....c10g.cs application OFFLINE OFFLINE
ora....db1.srv application OFFLINE OFFLINE
ora....SM1.asm application OFFLINE OFFLINE
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application OFFLINE OFFLINE
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
node1-> srvctl stop nodeapps -n node1
node1-> srvctl stop nodeapps -n node2
node1-> crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.devdb.db application OFFLINE OFFLINE
ora....b1.inst application OFFLINE OFFLINE
ora....b2.inst application OFFLINE OFFLINE
ora....c10g.cs application OFFLINE OFFLINE
ora....db1.srv application OFFLINE OFFLINE
ora....SM1.asm application OFFLINE OFFLINE
ora....E1.lsnr application OFFLINE OFFLINE
ora.node1.gsd application OFFLINE OFFLINE
ora.node1.ons application OFFLINE OFFLINE
ora.node1.vip application OFFLINE OFFLINE
ora....SM2.asm application OFFLINE OFFLINE
ora....E2.lsnr application OFFLINE OFFLINE
ora.node2.gsd application OFFLINE OFFLINE
ora.node2.ons application OFFLINE OFFLINE
ora.node2.vip application OFFLINE OFFLINE
node1->
B. Stop CRS
Node1:
[root@node1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node1 ~]#
Node2:
[root@node2 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.devdb.db application OFFLINE OFFLINE
ora....b1.inst application OFFLINE OFFLINE
ora....b2.inst application OFFLINE OFFLINE
ora....c10g.cs application OFFLINE OFFLINE
ora....db1.srv application OFFLINE OFFLINE
ora....SM1.asm application OFFLINE OFFLINE
ora....E1.lsnr application OFFLINE OFFLINE
ora.node1.gsd application OFFLINE OFFLINE
ora.node1.ons application OFFLINE OFFLINE
ora.node1.vip application OFFLINE OFFLINE
ora....SM2.asm application OFFLINE OFFLINE
ora....E2.lsnr application OFFLINE OFFLINE
ora.node2.gsd application OFFLINE OFFLINE
ora.node2.ons application OFFLINE OFFLINE
ora.node2.vip application OFFLINE OFFLINE
[root@node2 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node2 ~]#
7 Back up the 10gR2 RAC software
A. Back up the OCR and voting disk
Note:
The backup only needs to be taken on one of the nodes, because the OCR and voting disk live on shared storage.
Here the backup is taken on node 1 only.
Node1:
[root@node1 ~]# pwd
/root
[root@node1 ~]# mkdir 10gbackup
[root@node1 ~]# ll /dev/raw/ra*
crw-r----- 1 oracle oinstall 162, 1 Dec 24 16:56 /dev/raw/raw1
crw-r----- 1 oracle oinstall 162, 2 Dec 24 16:59 /dev/raw/raw2
[root@node1 ~]# dd if=/dev/raw/raw1 of=/root/10gbackup/10gocr.bak
262112+0 records in
262112+0 records out
134201344 bytes (134 MB) copied, 104.927 seconds, 1.3 MB/s
[root@node1 ~]# dd if=/dev/raw/raw2 of=/root/10gbackup/10gvotingdisk.bak
262112+0 records in
262112+0 records out
134201344 bytes (134 MB) copied, 96.4558 seconds, 1.4 MB/s
[root@node1 ~]# ll /root/10gbackup/
total 262376
-rw-r--r-- 1 root root 134201344 Dec 24 17:01 10gocr.bak
-rw-r--r-- 1 root root 134201344 Dec 24 17:04 10gvotingdisk.bak
[root@node1 ~]#
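After dd-copying a raw device it is worth verifying the backup byte for byte. A sketch using `cmp`, demonstrated on a scratch file (on the cluster, SRC would be /dev/raw/raw1 and DST /root/10gbackup/10gocr.bak):

```shell
# Verify a dd backup byte-for-byte with cmp.
# A scratch file stands in for the raw OCR/voting-disk device.
SRC=$(mktemp); DST=$(mktemp)
dd if=/dev/urandom of="$SRC" bs=1024 count=16 2>/dev/null  # stand-in "device"
dd if="$SRC" of="$DST" bs=1024 2>/dev/null                 # the backup copy
cmp "$SRC" "$DST" && echo "backup verified"                # prints "backup verified"
```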
B. Back up the RAC init scripts
Note:
On both nodes, back up the /etc/inittab configuration file and the init scripts:
/etc/init.d/init.crs
/etc/init.d/init.crsd
/etc/init.d/init.cssd
/etc/init.d/init.evmd
Node1:
[root@node1 10gbackup]# ll
total 262376
-rw-r--r-- 1 root root 134201344 Dec 24 17:01 10gocr.bak
-rw-r--r-- 1 root root 134201344 Dec 24 17:04 10gvotingdisk.bak
[root@node1 10gbackup]# cp /etc/inittab /root/10gbackup/inittab.bak
[root@node1 10gbackup]# cp /etc/init.d/init.crs /root/10gbackup/init.crs.bak
[root@node1 10gbackup]# cp /etc/init.d/init.crsd /root/10gbackup/init.crsd.bak
[root@node1 10gbackup]# cp /etc/init.d/init.cssd /root/10gbackup/init.cssd.bak
[root@node1 10gbackup]# cp /etc/init.d/init.evmd /root/10gbackup/init.evmd.bak
[root@node1 10gbackup]# ll
total 262456
-rw-r--r-- 1 root root 134201344 Dec 24 17:01 10gocr.bak
-rw-r--r-- 1 root root 134201344 Dec 24 17:04 10gvotingdisk.bak
-r-xr-xr-x 1 root root 2436 Dec 24 17:17 init.crs.bak
-r-xr-xr-x 1 root root 5532 Dec 24 17:17 init.crsd.bak
-r-xr-xr-x 1 root root 55174 Dec 24 17:17 init.cssd.bak
-r-xr-xr-x 1 root root 3499 Dec 24 17:17 init.evmd.bak
-rw-r--r-- 1 root root 1869 Dec 24 17:16 inittab.bak
[root@node1 10gbackup]#
Node2:
[root@node2 ~]# cd
[root@node2 ~]# mkdir 10gbackup
[root@node2 ~]# cd 10gbackup/
[root@node2 10gbackup]# ll
total 0
[root@node2 10gbackup]# cp /etc/inittab /root/10gbackup/inittab.bak
[root@node2 10gbackup]# cp /etc/init.d/init.crs /root/10gbackup/init.crs.bak
[root@node2 10gbackup]# cp /etc/init.d/init.crsd /root/10gbackup/init.crsd.bak
[root@node2 10gbackup]# cp /etc/init.d/init.cssd /root/10gbackup/init.cssd.bak
[root@node2 10gbackup]# cp /etc/init.d/init.evmd /root/10gbackup/init.evmd.bak
[root@node2 10gbackup]# ll
total 100
-r-xr-xr-x 1 root root 2436 Dec 24 17:18 init.crs.bak
-r-xr-xr-x 1 root root 5532 Dec 24 17:18 init.crsd.bak
-r-xr-xr-x 1 root root 55174 Dec 24 17:18 init.cssd.bak
-r-xr-xr-x 1 root root 3499 Dec 24 17:18 init.evmd.bak
-rw-r--r-- 1 root root 1869 Dec 24 17:18 inittab.bak
[root@node2 10gbackup]#
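The five `cp` commands in section B can also be done in one loop. A sketch using scratch files so it can be dry-run (on the real nodes the sources are /etc/inittab and /etc/init.d/init.*, and BACKUP is /root/10gbackup):

```shell
# Back up a list of files into $BACKUP with a .bak suffix, as section B does
# file by file. Scratch files stand in for /etc/inittab and /etc/init.d/init.*.
BACKUP=$(mktemp -d)
SRCDIR=$(mktemp -d)
for f in inittab init.crs init.crsd init.cssd init.evmd; do
    touch "$SRCDIR/$f"                       # stand-in source files
done
for f in "$SRCDIR"/*; do
    cp "$f" "$BACKUP/$(basename "$f").bak"   # copy with .bak suffix
done
ls "$BACKUP"
```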
C. Back up the Oracle software and clusterware
They can be backed up with the OS tar command:
tar -czf /home/oracle/oracle.tar.z $ORACLE_HOME/
This is not covered further here.
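A quick way to sanity-check such a tar backup is to list the archive contents afterwards. A sketch on a scratch directory standing in for $ORACLE_HOME:

```shell
# Create and verify a tar.gz backup; a scratch directory stands in for $ORACLE_HOME.
HOME_DIR=$(mktemp -d)
touch "$HOME_DIR/dummy.dbf"                  # stand-in file
TARBALL="$(mktemp -u).tar.gz"
tar -czf "$TARBALL" -C "$(dirname "$HOME_DIR")" "$(basename "$HOME_DIR")"
tar -tzf "$TARBALL"                          # listing succeeds only if the archive is intact
```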
D. Back up the database
Back up the database with RMAN; also not covered further here.
E. Remove /etc/oracle
Node1:
[root@node1 ~]# ll /etc/oracle/
total 12
-rw-r--r-- 1 root oinstall 45 Dec 21 22:19 ocr.loc
drwxrwxr-x 5 root root 4096 Dec 24 16:54 oprocd
drwxr-xr-x 3 root root 4096 Dec 21 22:19 scls_scr
[root@node1 ~]# mv /etc/oracle/ /root/10gbackup/etc_oracle
[root@node1 ~]# ll /etc/oracle
ls: /etc/oracle: No such file or directory
[root@node1 ~]#
Node2:
[root@node2 ~]# ll /etc/oracle/
total 24
-rw-r--r-- 1 root oinstall 45 Dec 21 22:21 ocr.loc
drwxrwxr-x 5 root root 4096 Dec 24 16:55 oprocd
drwxr-xr-x 3 root root 4096 Dec 21 22:21 scls_scr
[root@node2 ~]# mv /etc/oracle/ /root/10gbackup/etc_oracle
[root@node2 ~]# ll /etc/oracle/
total 8
drwxrwxr-x 5 root root 4096 Dec 24 17:29 oprocd
[root@node2 ~]# ll /root/10gbackup/etc_oracle/
total 24
-rw-r--r-- 1 root oinstall 45 Dec 21 22:21 ocr.loc
drwxrwxr-x 5 root root 4096 Dec 24 16:55 oprocd
drwxr-xr-x 3 root root 4096 Dec 21 22:21 scls_scr
[root@node2 ~]#
F. Remove /etc/init.d/init*
Node1:
[root@node1 10gbackup]# cd
[root@node1 ~]# cd /root/10gbackup/
[root@node1 10gbackup]# ll
total 262460
-rw-r--r-- 1 root root 134201344 Dec 24 17:01 10gocr.bak
-rw-r--r-- 1 root root 134201344 Dec 24 17:04 10gvotingdisk.bak
drwxr-xr-x 4 root oinstall 4096 Dec 21 23:07 etc_oracle
-r-xr-xr-x 1 root root 2436 Dec 24 17:17 init.crs.bak
-r-xr-xr-x 1 root root 5532 Dec 24 17:17 init.crsd.bak
-r-xr-xr-x 1 root root 55174 Dec 24 17:17 init.cssd.bak
-r-xr-xr-x 1 root root 3499 Dec 24 17:17 init.evmd.bak
-rw-r--r-- 1 root root 1869 Dec 24 17:16 inittab.bak
[root@node1 10gbackup]# mkdir init_mv
[root@node1 10gbackup]# ll /etc/init.d/init.*
-r-xr-xr-x 1 root root 2436 Dec 21 23:08 /etc/init.d/init.crs
-r-xr-xr-x 1 root root 5532 Dec 21 23:08 /etc/init.d/init.crsd
-r-xr-xr-x 1 root root 55174 Dec 21 23:08 /etc/init.d/init.cssd
-r-xr-xr-x 1 root root 3499 Dec 21 23:08 /etc/init.d/init.evmd
[root@node1 10gbackup]# mv /etc/init.d/init.* /root/10gbackup/init_mv/
[root@node1 10gbackup]# ll /etc/init.d/init.*
ls: /etc/init.d/init.*: No such file or directory
[root@node1 10gbackup]# ll /root/10gbackup/init_mv/
total 76
-r-xr-xr-x 1 root root 2436 Dec 21 23:08 init.crs
-r-xr-xr-x 1 root root 5532 Dec 21 23:08 init.crsd
-r-xr-xr-x 1 root root 55174 Dec 21 23:08 init.cssd
-r-xr-xr-x 1 root root 3499 Dec 21 23:08 init.evmd
[root@node1 10gbackup]#
Node2:
[root@node2 ~]# cd /root/10gbackup/
[root@node2 10gbackup]# mkdir init_mv
[root@node2 10gbackup]# ll /etc/init.d/init.*
-r-xr-xr-x 1 root root 2436 Dec 21 23:11 /etc/init.d/init.crs
-r-xr-xr-x 1 root root 5532 Dec 21 23:11 /etc/init.d/init.crsd
-r-xr-xr-x 1 root root 55174 Dec 21 23:11 /etc/init.d/init.cssd
-r-xr-xr-x 1 root root 3499 Dec 21 23:11 /etc/init.d/init.evmd
[root@node2 10gbackup]# mv /etc/init.d/init.* /root/10gbackup/init_mv/
[root@node2 10gbackup]# ll /etc/init.d/init.*
ls: /etc/init.d/init.*: No such file or directory
[root@node2 10gbackup]# ll /root/10gbackup/init_mv/
total 92
-r-xr-xr-x 1 root root 2436 Dec 21 23:11 init.crs
-r-xr-xr-x 1 root root 5532 Dec 21 23:11 init.crsd
-r-xr-xr-x 1 root root 55174 Dec 21 23:11 init.cssd
-r-xr-xr-x 1 root root 3499 Dec 21 23:11 init.evmd
[root@node2 10gbackup]#
G. Modify /etc/inittab
Node1:
[root@node1 10gbackup]# tail -10 /etc/inittab
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6
# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon
#h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
#h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
#h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
[root@node1 10gbackup]#
Node2:
[root@node2 10gbackup]# tail -10 /etc/inittab
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6
# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon
#h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
#h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
#h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
[root@node2 10gbackup]#
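Commenting out the three 10g CRS entries, as shown in the tails above, can be done with a single sed. A sketch on a scratch copy (on the real nodes FILE would be /etc/inittab, and sed keeps the original as /etc/inittab.orig):

```shell
# Comment out the 10g CRS entries (h1/h2/h3) in an inittab-style file with sed.
# Done on a scratch copy here; on the cluster FILE would be /etc/inittab.
FILE=$(mktemp)
cat > "$FILE" <<'EOF'
x:5:respawn:/etc/X11/prefdm -nodaemon
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
EOF
sed -i.orig 's/^h[123]:35:respawn/#&/' "$FILE"   # prefix matching lines with '#'
grep '^#h' "$FILE"                                # show the three commented entries
```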
H. Delete /tmp/.oracle and /var/tmp/.oracle
Node1:
[root@node1 10gbackup]# ll /tmp/.oracle/
total 0
[root@node1 10gbackup]# ll /var/tmp/.oracle/
total 0
srwxrwxrwx 1 oracle oinstall 0 Dec 22 10:08 s#4353.1
srwxrwxrwx 1 oracle oinstall 0 Dec 22 10:08 s#4353.2
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sAnode1_crs_evm
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sCnode1_crs_evm
srwxrwxrwx 1 root root 0 Dec 24 16:55 sCRSD_UI_SOCKET
srwxrwxrwx 1 root root 0 Dec 24 16:55 snode1DBG_CRSD
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 snode1DBG_CSSD
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 snode1DBG_EVMD
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sOCSSD_LL_node1_
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sOCSSD_LL_node1_crs
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sOracle_CSS_LclLstnr_crs_1
srwxrwxrwx 1 root root 0 Dec 24 16:55 sora_crsqs
srwxrwxrwx 1 root root 0 Dec 24 16:55 sprocr_local_conn_0_PROC
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sSYSTEM.evm.acceptor.auth
[root@node1 10gbackup]# rm -rf /tmp/.oracle/
[root@node1 10gbackup]# rm -rf /var/tmp/.oracle/
[root@node1 10gbackup]#
[root@node1 10gbackup]#
[root@node1 10gbackup]# ll /tmp/.oracle/
ls: /tmp/.oracle/: No such file or directory
[root@node1 10gbackup]# ll /var/tmp/.oracle/
ls: /var/tmp/.oracle/: No such file or directory
[root@node1 10gbackup]#
Node2:
[root@node2 10gbackup]# ll /tmp/.oracle/
total 0
[root@node2 10gbackup]# ll /var/tmp/.oracle/
total 64
srwxrwxrwx 1 oracle oinstall 0 Dec 22 10:08 s#15791.1
srwxrwxrwx 1 oracle oinstall 0 Dec 22 10:08 s#15791.2
srwxrwxrwx 1 oracle oinstall 0 Dec 23 14:37 s#5706.1
srwxrwxrwx 1 oracle oinstall 0 Dec 23 14:37 s#5706.2
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sAnode2_crs_evm
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sCnode2_crs_evm
srwxrwxrwx 1 root root 0 Dec 24 16:55 sCRSD_UI_SOCKET
srwxrwxrwx 1 root root 0 Dec 24 16:55 snode2DBG_CRSD
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 snode2DBG_CSSD
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 snode2DBG_EVMD
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sOCSSD_LL_node2_
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sOCSSD_LL_node2_crs
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sOracle_CSS_LclLstnr_crs_2
srwxrwxrwx 1 root root 0 Dec 24 16:55 sora_crsqs
srwxrwxrwx 1 root root 0 Dec 24 16:55 sprocr_local_conn_0_PROC
srwxrwxrwx 1 oracle oinstall 0 Dec 24 16:55 sSYSTEM.evm.acceptor.auth
[root@node2 10gbackup]# rm -rf /tmp/.oracle/
[root@node2 10gbackup]# rm -rf /var/tmp/.oracle/
[root@node2 10gbackup]# ll /tmp/.oracle
ls: /tmp/.oracle: No such file or directory
[root@node2 10gbackup]# ll /var/tmp/.oracle
ls: /var/tmp/.oracle: No such file or directory
[root@node2 10gbackup]#