Installing Oracle 11g R2 ASM+RAC on Oracle Linux


Project plan:

Business environment: HIS (hospital information system)

Physical server spec: 2 CPUs, 28 cores each, 256 GB RAM, 447 GB SSD.

Hostnames: his101/his102

IP addresses: 192.168.1.201-205

OS version: OracleLinux-R7-U6-Server-x86_64

Database version: Oracle 11g (11.2.0.4.0) + PSU patch

db name: hisdb

instance names: hisdb1/hisdb2


Cluster IP plan: at least 7 addresses (5 public, 2 private)
192.168.1.201 his101 # hostname must not exceed 8 characters
192.168.1.202 his102 # hostname must not exceed 8 characters

10.10.10.101 his101prv
10.10.10.102 his102prv

192.168.1.203 his101vip
192.168.1.204 his102vip

192.168.1.205 hisdbscan


Oracle Linux OS partitioning:

| Partition type | Mount point | Size | Notes |
| --- | --- | --- | --- |
| Standard partition | / | 100G | root filesystem |
| Standard partition | swap | 128G | swap space |
| Standard partition | /oracle | 100G | database software installation directory |
| Standard partition | /boot | 1G | oversized; actual usage is about 220M, so 500M is plenty and can be grown later |
| (on /) | /home | 0G | kept on the root filesystem |
| (on /) | /tmp | 0G | kept on the root filesystem |
| (on /) | /var | 0G | kept on the root filesystem |
| (on /) | /usr | 0G | kept on the root filesystem |
| Unallocated | - | 118G | spare capacity held back |
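As a quick sanity check, the planned partitions plus the spare space should add up to the 447G SSD; a one-liner (sizes taken from the table above) confirms it:

```shell
# Sum the planned partition sizes (GB): / + swap + /oracle + /boot + spare.
echo $(( 100 + 128 + 100 + 1 + 118 ))   # prints 447, matching the 447G SSD
```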

Production storage plan:

| System | Purpose | Devices | Count | Size each | Total | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Oracle 11gR2 database | asmcrs | /dev/sdd-f | 3 | 1G | 3G | 3 disks of 1G each |
| Oracle 11gR2 database | asmdgdata | /dev/sdg-k | 5 | 200G | 1000G | data files, SYSTEM tablespace, undo tablespace, control files, online redo logs, spfile, temp tablespace |
| Oracle 11gR2 database | asmdgarch | /dev/sdl-p | 5 | 200G | 1000G | archived redo log files and related recovery files |
| Oracle 11gR2 database | lvbackup | /dev/sdq-s | 3 | 1000G | 3000G | 3 disks of 1000G each, for Oracle backups |

OS installation:
Installing an OS on bare metal for the first time, I had no idea where to start. Write the OracleLinux-R7-U6-Server-x86_64.iso file to a USB stick with UltraISO, set the boot order to boot from USB, and choose legacy BIOS boot (LEGACY), not UEFI. (There appear to be three installation methods; this one is closest to an ordinary OS install. For the others, see http://www.h3c.com/cn/Service/Document_Software/Document_Center/Home/Server/00-Public/Software_Installation/Installation_Manual/H3C_Server_CZXT_IG-6W104/.)


Part 1: creating the ASM disk groups

1.1 Configure the virtual storage (in production this is done by the storage engineers)


1.2 Connect the servers to the storage over iSCSI (production uses either an IP SAN or FC SAN network; this environment is IP SAN)

Mount the ISO:
mount /iso/OracleLinux-R7-U6-Server-x86_64-dvd.iso /oralinux 
Install the iSCSI initiator (from yum, or directly from the mounted DVD):
yum install iscsi-initiator-utils -y 
rpm -ivh /oralinux/Packages/iscsi-initiator-utils-6.2.0.874-10.0.1.el7.x86_64.rpm
cat /etc/iscsi/initiatorname.iscsi

echo "options=--whitelisted --replace-whitespace" > /etc/scsi_id.config
systemctl enable iscsi
systemctl start iscsi
iscsiadm -m discovery -t st -p 10.132.145.51
iscsiadm -m discovery -t st -p 10.132.145.52
iscsiadm -m discovery -t st -p 10.132.145.53
iscsiadm -m discovery -t st -p 10.132.145.54
iscsiadm -m node -T iqn.2000-05.com.3pardata:20210002ac0255af -p 10.132.145.51:3260 -l
iscsiadm -m node -T iqn.2000-05.com.3pardata:20220002ac0255af -p 10.132.145.52:3260 -l
iscsiadm -m node -T iqn.2000-05.com.3pardata:21210002ac0255af -p 10.132.145.53:3260 -l
iscsiadm -m node -T iqn.2000-05.com.3pardata:21220002ac0255af -p 10.132.145.54:3260 -l

iscsiadm -m session -P 3
cd /var/lib/iscsi/nodes/
ls -1R 

Four paths have now been established:

/var/lib/iscsi/nodes/:
iqn.2000-05.com.3pardata:20210002ac0255af
iqn.2000-05.com.3pardata:20220002ac0255af
iqn.2000-05.com.3pardata:21210002ac0255af
iqn.2000-05.com.3pardata:21220002ac0255af


/var/lib/iscsi/nodes/iqn.2000-05.com.3pardata:20210002ac0255af:10.132.145.51,3260,21

/var/lib/iscsi/nodes/iqn.2000-05.com.3pardata:20210002ac0255af/10.132.145.51,3260,21:default

/var/lib/iscsi/nodes/iqn.2000-05.com.3pardata:20220002ac0255af:10.132.145.52,3260,22

/var/lib/iscsi/nodes/iqn.2000-05.com.3pardata:20220002ac0255af/10.132.145.52,3260,22:default

/var/lib/iscsi/nodes/iqn.2000-05.com.3pardata:21210002ac0255af:10.132.145.53,3260,121

/var/lib/iscsi/nodes/iqn.2000-05.com.3pardata:21210002ac0255af/10.132.145.53,3260,121:default

/var/lib/iscsi/nodes/iqn.2000-05.com.3pardata:21220002ac0255af:10.132.145.54,3260,122

/var/lib/iscsi/nodes/iqn.2000-05.com.3pardata:21220002ac0255af/10.132.145.54,3260,122:default

fdisk -l |grep "Disk /dev/"
lsblk |wc -l

tail -200f /var/log/messages

1.3 Configure multipathing and the ASM disks (production)

Check whether the multipath packages are installed:
rpm -qa|grep multipath 

device-mapper-multipath-libs-0.4.9-123.el7.x86_64
device-mapper-multipath-0.4.9-123.el7.x86_64

Load the multipath modules into the kernel:
modprobe dm-multipath         
modprobe dm-round-robin
lsmod |grep dm_multipath

cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/multipath.conf
systemctl enable multipathd
systemctl start multipathd

Build the multipath maps:
multipath -v2  

multipath -ll |grep mpath

Count how many multipath devices there are:

multipath -ll |grep policy |wc -l  

If the count does not come out, identify the disks as follows:

for i in `cat /proc/partitions | awk '{print$4}' |grep sd | grep [a-z]$`; do echo "### $i: `/lib/udev/scsi_id --whitelisted --device=/dev/$i`"; done
cat /proc/partitions |grep sdc
ls -l /dev/disk/by-id 
iscsiadm -m session -P 3  |grep "Attached"

            Attached scsi disk sdw        State: running
            Attached scsi disk sdaa        State: running
            Attached scsi disk sdbg        State: running
            Attached scsi disk sdbi        State: running
            Attached scsi disk sdbj        State: running
            Attached scsi disk sdbk        State: running
            Attached scsi disk sdbl        State: running
            Attached scsi disk sdbm        State: running
            Attached scsi disk sdae        State: running
            Attached scsi disk sdai        State: running
            Attached scsi disk sdam        State: running
            Attached scsi disk sdaq        State: running
            Attached scsi disk sdat        State: running
            Attached scsi disk sdax        State: running
            Attached scsi disk sdbb        State: running
            Attached scsi disk sdbe        State: running


for i in w aa bg bi bj bk bl bm ae ai am aq at ax bb be; do
  fdisk -l /dev/sd$i | grep "Disk /dev/sd"
done


for i in w aa bg bi bj bk bl bm ae ai am aq at ax bb be; 
do 
echo "sd$i" "`/usr/lib/udev/scsi_id  --whitelisted --replace-whitespace --device=/dev/sd$i` "; 
done
The UUIDs returned:
sdw 360002ac00000000000000034000255af 
sdaa 360002ac00000000000000035000255af 
sdbg 360002ac0000000000000002e000255af 
sdbi 360002ac0000000000000002f000255af 
sdbj 360002ac00000000000000030000255af 
sdbk 360002ac00000000000000031000255af 
sdbl 360002ac00000000000000032000255af 
sdbm 360002ac00000000000000033000255af 
sdae 360002ac00000000000000036000255af 
sdai 360002ac00000000000000037000255af 
sdam 360002ac00000000000000038000255af 
sdaq 360002ac00000000000000039000255af 
sdat 360002ac0000000000000003a000255af 
sdax 360002ac0000000000000003b000255af 
sdbb 360002ac0000000000000002c000255af 
sdbe 360002ac0000000000000002d000255af 


more /etc/multipath.conf
Extract the WWID of the disk the OS is installed on:
for i in `cat /proc/partitions | awk '{print$4}' |grep sd | grep [a-z]$`; do echo "### $i: `/lib/udev/scsi_id --whitelisted --device=/dev/$i`"; done
his101:
sda: 3600508b1001c2a3075e4629a4ce944ca

his102:
sda: 3600508b1001c5de67478b76b073933a6


Blacklist the OS disk WWIDs in the configuration file so they are not multipathed.
The configuration file:
defaults {
#    polling_interval     10
    path_selector        "round-robin 0"
    path_grouping_policy    group_by_prio    
#    uid_attribute        ID_SERIAL
#    prio            alua
#    path_checker        readsector0
    rr_min_io        100
    max_fds            8192
    rr_weight        uniform
    failback        immediate
#    no_path_retry        fail
    user_friendly_names    yes
    find_multipaths         yes
}


blacklist {
       wwid 3600508b1001c2a3075e4629a4ce944ca
       wwid 3600508b1001c5de67478b76b073933a6 
#    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
#    devnode "^hd[a-z]"
}
multipaths {
        multipath {
                wwid                    360002ac0000000000000002c000255af    
                alias                    asm-dggrid1
        }
        multipath {
                wwid                    360002ac0000000000000002d000255af
                alias                   asm-dggrid2
        }
        multipath {
                wwid                    360002ac0000000000000002e000255af
                alias                   asm-dggrid3
        }
        multipath {
                wwid                    360002ac0000000000000002f000255af 
                alias                   asm-dgdata1
        }
        multipath {
                wwid                    360002ac00000000000000030000255af
                alias                   asm-dgdata2
        }
        multipath {
                wwid                    360002ac00000000000000031000255af
                alias                   asm-dgdata3
        }
        multipath {
                wwid                    360002ac00000000000000032000255af
                alias                   asm-dgdata4
        }
        multipath {
                wwid                    360002ac00000000000000033000255af
                alias                   asm-dgdata5
        }
        multipath {
                wwid                    360002ac00000000000000034000255af
                alias                   asm-dgarch1
        }
        multipath {
                wwid                    360002ac00000000000000035000255af
                alias                   asm-dgarch2
        }
        multipath {
                wwid                    360002ac00000000000000036000255af
                alias                   asm-dgarch3
        }
        multipath {
                wwid                    360002ac00000000000000037000255af
                alias                   asm-dgarch4
        }
        multipath {
                wwid                    360002ac00000000000000038000255af
                alias                   asm-dgarch5
        }
        multipath {
                wwid                    360002ac00000000000000039000255af
                alias                   fs-backup1
        }
        multipath {
                wwid                    360002ac0000000000000003a000255af
                alias                   fs-backup2
        }
        multipath {
                wwid                    360002ac0000000000000003b000255af
                alias                   fs-backup3
        }

}
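Typing sixteen multipath{} stanzas by hand invites copy-paste mistakes. A small generator can print them from a list of "wwid alias" pairs; this is just a sketch (only the first three pairs from the list above are shown — extend the here-document with the rest):

```shell
#!/bin/sh
# Print multipath{} stanzas for /etc/multipath.conf from "wwid alias" pairs.
while read -r wwid alias; do
  printf 'multipath {\n        wwid                    %s\n        alias                   %s\n}\n' \
         "$wwid" "$alias"
done <<'EOF'
360002ac0000000000000002c000255af asm-dggrid1
360002ac0000000000000002d000255af asm-dggrid2
360002ac0000000000000002e000255af asm-dggrid3
EOF
```

Paste the output inside the multipaths { } section of /etc/multipath.conf.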

Flush the existing multipath maps:

multipath -F 
systemctl stop multipathd 
systemctl start multipathd

Rebuild the multipath maps:

multipath -v2  

List the multipath devices:

multipath -ll 

1.4 Configure multipathing for the backup disks (production)
-- ASM disks:

dmsetup ls |grep asm


asm-dgarch2    (252:2)
asm-dgarch1    (252:0)
asm-dgdata5    (252:15)
asm-dgdata4    (252:14)
asm-dgdata3    (252:12)
asm-dgdata2    (252:11)
asm-dgdata1    (252:13)
asm-dggrid3    (252:10)
asm-dggrid2    (252:9)
asm-dgarch5    (252:5)
asm-dggrid1    (252:8)
asm-dgarch4    (252:3)

Officially recommended approach on 7.x:

for i in asm-dggrid1 asm-dggrid2 asm-dggrid3 asm-dgdata1 asm-dgdata2 asm-dgdata3 asm-dgdata4 asm-dgdata5 asm-dgarch1 asm-dgarch2 asm-dgarch3 asm-dgarch4 asm-dgarch5 fs-backup1 fs-backup2 fs-backup3; 
do
  printf "%s %s\n" "$i" "$(udevadm info --query=all --name=/dev/mapper/$i |grep -i dm_uuid)"; 
done

The DM UUIDs obtained:

asm-dggrid1 E: DM_UUID=mpath-360002ac0000000000000002c000255af
asm-dggrid2 E: DM_UUID=mpath-360002ac0000000000000002d000255af
asm-dggrid3 E: DM_UUID=mpath-360002ac0000000000000002e000255af
asm-dgdata1 E: DM_UUID=mpath-360002ac0000000000000002f000255af
asm-dgdata2 E: DM_UUID=mpath-360002ac00000000000000030000255af
asm-dgdata3 E: DM_UUID=mpath-360002ac00000000000000031000255af
asm-dgdata4 E: DM_UUID=mpath-360002ac00000000000000032000255af
asm-dgdata5 E: DM_UUID=mpath-360002ac00000000000000033000255af
asm-dgarch1 E: DM_UUID=mpath-360002ac00000000000000034000255af
asm-dgarch2 E: DM_UUID=mpath-360002ac00000000000000035000255af
asm-dgarch3 E: DM_UUID=mpath-360002ac00000000000000036000255af
asm-dgarch4 E: DM_UUID=mpath-360002ac00000000000000037000255af
asm-dgarch5 E: DM_UUID=mpath-360002ac00000000000000038000255af
fs-backup1 E: DM_UUID=mpath-360002ac00000000000000039000255af
fs-backup2 E: DM_UUID=mpath-360002ac0000000000000003a000255af
fs-backup3 E: DM_UUID=mpath-360002ac0000000000000003b000255af

Edit the following file:

vi /etc/udev/rules.d/99-oracle-asmdevices.rules

KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac0000000000000002c000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac0000000000000002d000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac0000000000000002e000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac0000000000000002f000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac00000000000000030000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac00000000000000031000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac00000000000000032000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac00000000000000033000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac00000000000000034000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac00000000000000035000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac00000000000000036000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac00000000000000037000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-360002ac00000000000000038000255af",OWNER="grid",GROUP="asmadmin",MODE="0660"
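The thirteen rules lines above differ only in the WWID, so they can be generated rather than typed; a sketch (only three of the ASM WWIDs shown — add the rest), with the output redirected into the rules file:

```shell
#!/bin/sh
# Emit one udev rule per ASM multipath WWID; redirect the output into
# /etc/udev/rules.d/99-oracle-asmdevices.rules on both nodes.
for wwid in \
    360002ac0000000000000002c000255af \
    360002ac0000000000000002d000255af \
    360002ac0000000000000002e000255af; do
  printf 'KERNEL=="dm-*",ENV{DM_UUID}=="mpath-%s",OWNER="grid",GROUP="asmadmin",MODE="0660"\n' "$wwid"
done
```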

Create the required groups and users beforehand:

/usr/sbin/groupadd -g 5001 oinstall
/usr/sbin/groupadd -g 5002 dba
/usr/sbin/groupadd -g 5003 oper
/usr/sbin/groupadd -g 5004 asmadmin
/usr/sbin/groupadd -g 5005 asmoper
/usr/sbin/groupadd -g 5006 asmdba
/usr/sbin/useradd -u 6001 -g oinstall -G dba,asmdba,oper oracle
/usr/sbin/useradd -u 6002 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid


/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change

ls -lsa /dev/dm*

Create multipathed storage on the 3 backup disks (node his101 shown).
Scan out the disks:

iscsiadm -m session -P 3  |grep "Attached"  


fs-backup1 E: DM_UUID=mpath-360002ac00000000000000039000255af
fs-backup2 E: DM_UUID=mpath-360002ac0000000000000003a000255af
fs-backup3 E: DM_UUID=mpath-360002ac0000000000000003b000255af

ls -lsa /dev/mapper/fs*  

pvcreate /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc
  Physical volume "/dev/mapper/mpatha" successfully created.
  Physical volume "/dev/mapper/mpathb" successfully created.
  Physical volume "/dev/mapper/mpathc" successfully created.

vgcreate backupvg /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc

Volume group "backupvg" successfully created
lvcreate -n backuplv -L 1000G backupvg
Logical volume "backuplv" created.
mkfs.xfs /dev/backupvg/backuplv
mkdir /backup
mount /dev/backupvg/backuplv /backup
df -h
vi /etc/fstab
#/dev/backupvg/backuplv /backup xfs   defaults 0 0
[root@his101 /]# cp -r /etc/lvm /etc/lvm-bak
[root@his101 /]# umount /backup
[root@his101 /]# vgchange -an backupvg   # deactivate
  0 logical volume(s) in volume group "backupvg" now active
[root@his101 /]# vgexport backupvg
  Volume group "backupvg" successfully exported  

After exporting, import the volume group on the second node.
The same technique can be used to migrate TB-scale databases.
Frequently used commands:

lvscan
vgscan
pvscan
vgimport backupvg
vgchange -ay backupvg   # activate
vgs
lvs
pvs
mkdir /backup
mount /dev/backupvg/backuplv /backup

On the first node (export):

umount /backup
vgchange -an backupvg
vgexport backupvg

On the second node (import):

vgimport backupvg
vgchange -ay backupvg
mount /dev/backupvg/backuplv /backup

If you hit warnings or "PV not found" errors, run pvscan --cache, which scans every disk in the system that holds a PV and rewrites the LVM cache.


Part 2: Linux operating system configuration

2.1 Configure the hosts file

vi /etc/hosts
#public ip
192.168.1.201 his101
192.168.1.202 his102

#priv ip
10.10.10.201  his101priv
10.10.10.202  his102priv

#vip ip
192.168.1.203 his101vip
192.168.1.204 his102vip

#scan ip
192.168.1.205 hisscan

2.2 Set the locale

echo "export LANG=en_US" >>  ~/.bash_profile
cat ~/.bash_profile

2.3 Create users, groups, and directories (skip any that were already created earlier)

/usr/sbin/groupadd -g 5001 oinstall
/usr/sbin/groupadd -g 5002 dba
/usr/sbin/groupadd -g 5003 oper
/usr/sbin/groupadd -g 5004 asmadmin
/usr/sbin/groupadd -g 5005 asmoper
/usr/sbin/groupadd -g 5006 asmdba
/usr/sbin/useradd -u 6001 -g oinstall -G dba,asmdba,oper oracle
/usr/sbin/useradd -u 6002 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid

Set passwords for the grid and oracle users:

passwd grid
passwd oracle

Create the Oracle installation directories:

mkdir -p /oracle/app/grid
mkdir -p /oracle/app/11.2.0/grid
chown -R grid:oinstall /oracle
mkdir -p /oracle/app/oraInventory
chown -R grid:oinstall /oracle/app/oraInventory
mkdir -p /oracle/app/oracle
chown -R oracle:oinstall /oracle/app/oracle
chmod -R 775 /oracle

2.4 Configure the yum repository and install packages

cd /etc/yum.repos.d
mkdir bk
mv *.repo bk/
mkdir /iso

Copy the ISO file into /iso, then:

mkdir /oralinux
mount /iso/OracleLinux-R7-U6-Server-x86_64-dvd.iso /oralinux

echo "[EL]" >> /etc/yum.repos.d/oralinux.repo 
echo "name=Linux 7.x DVD" >> /etc/yum.repos.d/oralinux.repo 
echo "baseurl=file:///oralinux" >> /etc/yum.repos.d/oralinux.repo 
echo "gpgcheck=0" >> /etc/yum.repos.d/oralinux.repo 
echo "enabled=1" >> /etc/yum.repos.d/oralinux.repo 

cat /etc/yum.repos.d/oralinux.repo 
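The five echo commands above can also be written as a single here-document (same file, same contents; the name= spacing is normalized):

```shell
# One-shot version of the repo file created by the echo commands above.
cat > /etc/yum.repos.d/oralinux.repo <<'EOF'
[EL]
name=Linux 7.x DVD
baseurl=file:///oralinux
gpgcheck=0
enabled=1
EOF
```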


yum clean all
yum repolist

Install the required packages and dependencies:

# From Public Yum or ULN
yum -y install autoconf 
yum -y install automake 
yum -y install binutils
yum -y install binutils-devel 
yum -y install bison 
yum -y install cpp 
yum -y install dos2unix 
yum -y install ftp 
yum -y install gcc 
yum -y install gcc-c++ 
yum -y install lrzsz 
yum -y install python-devel 
yum -y install compat-db* 
yum -y install compat-gcc-34 
yum -y install compat-gcc-34-c++ 
yum -y install compat-libcap1
yum -y install compat-libstdc++-33 
yum -y install compat-libstdc++-33.i686
yum -y install glibc-* 
yum -y install glibc-*.i686 
yum -y install libXpm-*.i686 
yum -y install libXp.so.6 
yum -y install libXt.so.6 
yum -y install libXtst.so.6 
yum -y install libXext
yum -y install libXext.i686
yum -y install libXtst 
yum -y install libXtst.i686
yum -y install libX11
yum -y install libX11.i686
yum -y install libXau
yum -y install libXau.i686
yum -y install libxcb
yum -y install libxcb.i686
yum -y install libXi
yum -y install libXi.i686
yum -y install libXtst
yum -y install libstdc++-docs
yum -y install libgcc_s.so.1
yum -y install libstdc++.i686
yum -y install libstdc++-devel
yum -y install libstdc++-devel.i686
yum -y install libaio
yum -y install libaio.i686
yum -y install libaio-devel
yum -y install libaio-devel.i686
yum -y install ksh 
yum -y install libXp 
yum -y install libaio-devel 
yum -y install numactl 
yum -y install numactl-devel 
yum -y install make
yum -y install sysstat
yum -y install unixODBC 
yum -y install unixODBC-devel 
yum -y install elfutils-libelf-devel-0.97 
yum -y install elfutils-libelf-devel
yum -y install redhat-lsb-core
yum -y install unzip

2.5 Modify system parameters

1) Resource limits (limits.conf)

vi /etc/security/limits.conf
#ORACLE SETTING
grid                 soft    nproc   16384 
grid                 hard    nproc   16384
grid                 soft    nofile  65536
grid                 hard    nofile  65536
grid                 soft    stack   32768
grid                 hard    stack   32768
oracle               soft    nproc   16384
oracle               hard    nproc   16384
oracle               soft    nofile  65536
oracle               hard    nofile  65536
oracle               soft    stack   32768
oracle               hard    stack   32768
oracle               hard    memlock 211608995  
oracle               soft    memlock 211608995

-- 211608995: the server has 256 GB = 264511244 KB of RAM, and 264511244 * 0.8 = 211608995, i.e. 80% of physical memory.

su - grid
ulimit -a

memlock is in KB and must stay below physical memory; for example, with 32 GB of RAM you might set 20 GB, or with 4 GB of RAM, a 3 GB SGA.
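Rather than working out 80% of RAM by hand, the memlock value (in KB, as limits.conf expects) can be computed from /proc/meminfo; a sketch:

```shell
# memlock limit = 80% of physical memory, in KB.
# On this server: 264511244 KB * 0.8 = 211608995, the value used above.
awk '/^MemTotal:/ {printf "%d\n", $2 * 0.8}' /proc/meminfo
```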

2) The NPROC limit

cat /etc/security/limits.d/20-nproc.conf
echo "* - nproc 16384" > /etc/security/limits.d/20-nproc.conf
cat /etc/security/limits.d/20-nproc.conf

(This is the per-user process limit; it does need changing, and the right value differs per environment.)

3) PAM enforcement of the user resource limits

echo "session    required     pam_limits.so" >> /etc/pam.d/login
cat /etc/pam.d/login

4) Kernel parameters

vi /etc/sysctl.conf
#ORACLE SETTING
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.panic_on_oops = 1
vm.nr_hugepages = 77824  
kernel.shmmax = 198642237440
kernel.shmall = 489266595 
kernel.shmmni = 4096
sysctl -p

What the four key parameters mean:

kernel.shmmax
The maximum size, in bytes, of a single shared memory segment. It must be large enough to hold the whole database SGA: sga_max_size <= shmmax < physical memory (staying under about 80% of RAM is recommended), and shmmax should not exceed the memlock limit. Here 198642237440 bytes is about 185 GB.

kernel.shmall
The total shared memory allowed, in pages: shmall = shmmax / page size. getconf PAGESIZE returns 4096, so 198642237440 / 4096 = 48496640 pages (the value configured above is larger than this, which is harmless headroom).

kernel.shmmni
The maximum number of shared memory segments.

vm.nr_hugepages
The number of 2 MB huge pages; strongly recommended whenever physical memory exceeds 8 GB. Constraints: nr_hugepages (in 2 MB units) must cover the SGA but stay below the memlock limit and below about 80% of physical RAM. A rule of thumb is sga_max_size / 2MB plus 100-500 extra pages, e.g. 800 + 200 = 1000. Check the current state with cat /proc/meminfo:

HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0

In general vm.nr_hugepages >= SGA_Target / Hugepagesize (2 MB).
For example, with SGA = 150 GB: vm.nr_hugepages = (150+2)*1024/2 = 77824.
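The huge-page arithmetic above is easy to check in the shell:

```shell
# vm.nr_hugepages for a 150G SGA with 2M huge pages, plus 2G of headroom:
echo $(( (150 + 2) * 1024 / 2 ))   # prints 77824, the value configured above
```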

5) Disable transparent huge pages (6.x/7.x)

cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never 

The bracketed [always] means THP is currently enabled:

cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never 

vi /etc/rc.d/rc.local

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi

chmod +x /etc/rc.d/rc.local

6) Disable NUMA

numactl --hardware

vi /etc/default/grub
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet numa=off"
grub2-mkconfig -o /etc/grub2.cfg

numactl --hardware

7) Switch from the graphical target to text mode

systemctl set-default multi-user.target

8) The /dev/shm shared memory filesystem
Resize it as needed, then remount:

echo "none     /dev/shm       tmpfs   defaults,size=190000m        0 0" >>/etc/fstab


mount -o remount /dev/shm

9) Set the time zone

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
hwclock

2.6 Security configuration

1) Disable SELinux

echo "SELINUX=disabled" > /etc/selinux/config
echo "#SELINUXTYPE=targeted " >> /etc/selinux/config
setenforce 0

2) Disable the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service

Reboot the OS.

2.7 Configure NOZEROCONF

echo "NOZEROCONF=yes" >> /etc/sysconfig/network

2.8 Modify nsswitch.conf

vi /etc/nsswitch.conf
Change the line
hosts:      files dns myhostname
to
hosts:      files dns myhostname nis

2.9 Disable the avahi-daemon

systemctl stop avahi-daemon.socket avahi-daemon.service
systemctl disable avahi-daemon.socket avahi-daemon.service

2.10 Keep the two nodes' clocks in sync

systemctl stop ntpd
systemctl disable ntpd
systemctl status ntpd

date -s "Fri Mar 27 01:37:33 CST 2020"

2.11 Configure the grid/oracle environment variables

Node 1:

su - grid
vi ~/.bash_profile

PS1="[`whoami`@`hostname`:"'$PWD]$'
export PS1
umask 022
#alias sqlplus="rlwrap sqlplus"
export TMP=/tmp
export LANG=en_US
export TMPDIR=$TMP
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_BASE=/oracle/app/grid; export ORACLE_BASE
ORACLE_HOME=/oracle/app/11.2.0/grid; export ORACLE_HOME
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin; export PATH
THREADS_FLAG=native; export THREADS_FLAG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
            ulimit -p 16384
              ulimit -n 65536
  else
   ulimit -u 16384 -n 65536
      fi
    umask 022
fi
su - oracle
The oracle user's environment variables:
vim ~/.bash_profile
PS1="[`whoami`@`hostname`:"'$PWD]$'
#alias sqlplus="rlwrap sqlplus"
#alias rman="rlwrap rman"
export PS1
export TMP=/tmp
export LANG=en_US
export TMPDIR=$TMP
export ORACLE_UNQNAME=hisdb
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=hisdb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;export NLS_LANG
PATH=.:$PATH:$HOME/bin:$ORACLE_BASE/product/11.2.0/db_1/bin:$ORACLE_HOME/bin; export PATH
THREADS_FLAG=native; export THREADS_FLAG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
            ulimit -p 16384
              ulimit -n 65536
  else
   ulimit -u 16384 -n 65536
      fi
    umask 022
fi

Node 2:

su - grid

PS1="[`whoami`@`hostname`:"'$PWD]$'
export PS1
umask 022
#alias sqlplus="rlwrap sqlplus"
export TMP=/tmp
export LANG=en_US
export TMPDIR=$TMP
ORACLE_SID=+ASM2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_BASE=/oracle/app/grid; export ORACLE_BASE
ORACLE_HOME=/oracle/app/11.2.0/grid; export ORACLE_HOME
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin; export PATH
THREADS_FLAG=native; export THREADS_FLAG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
            ulimit -p 16384
              ulimit -n 65536
  else
   ulimit -u 16384 -n 65536
      fi
    umask 022
fi

su - oracle
The oracle user's environment variables:

PS1="[`whoami`@`hostname`:"'$PWD]$'
#alias sqlplus="rlwrap sqlplus"
#alias rman="rlwrap rman"
export PS1
export TMP=/tmp
export LANG=en_US
export TMPDIR=$TMP
export ORACLE_UNQNAME=hisdb
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=hisdb2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;export NLS_LANG
PATH=.:$PATH:$HOME/bin:$ORACLE_BASE/product/11.2.0/db_1/bin:$ORACLE_HOME/bin; export PATH
THREADS_FLAG=native; export THREADS_FLAG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
            ulimit -p 16384
              ulimit -n 65536
  else
   ulimit -u 16384 -n 65536
      fi
    umask 022
fi

2.12 Set up SSH user equivalence

Unpack ssh-scripts and make the scripts executable: chmod 777 *

./sshUserSetup.sh -user grid  -hosts "his101 his102" -advanced -exverify -confirm
./sshUserSetup.sh -user oracle  -hosts "his101 his102" -advanced -exverify -confirm

Part 3: installing RAC

RAC installation:

chown -R grid:oinstall /backup
chmod -R 777 /backup
su grid
cd /backup
ls
unzip p13390677_112040_Linux-x86-64_3of7.zip
unzip p19404309_112040_Linux-x86-64.zip
cp b19404309/grid/cvu_prereq.xml grid/stage/cvu/

exit
cd /backup/grid/rpm
rpm -ivh cvuqdisk-1.0.9-1.rpm

Copy the package to the other node and install it there as well:

scp cvuqdisk-1.0.9-1.rpm his102:/tmp
rpm -ivh cvuqdisk-1.0.9-1.rpm

Install through VNC.
Mount the DVD and install VNC:

yum install *vnc* -y
su - grid

Start the VNC server:

vncserver 

Log in through VNC and open a terminal:

./runInstaller -jreLoc /etc/alternatives/jre_1.8.0

Fix any prerequisite problems the installer reports.

At 76% the installer prompts you to run the first root script. Before running the second script, patch p18370031 must be applied:

su - grid 
cd /oracle/app/11.2.0
unzip p18370031_112040_Linux-x86-64.zip
cd 18370031/
/oracle/app/11.2.0/grid/OPatch/opatch apply

Apply the patch on the second node at the same time, then confirm it is installed:

/oracle/app/11.2.0/grid/OPatch/opatch lsinventory

Then run the root scripts:

/oracle/app/oraInventory/orainstRoot.sh

/oracle/app/11.2.0/grid/root.sh

A problem that blocked the installation:

ASM failed to start. Check /oracle/app/grid/cfgtoollogs/asmca/asmca-200317AM080920.log for details.

Configuration of ASM ... failed
see asmca logs at /oracle/app/grid/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /oracle/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6912.
/oracle/app/11.2.0/grid/perl/bin/perl -I/oracle/app/11.2.0/grid/perl/lib -I/oracle/app/11.2.0/grid/crs/install /oracle/app/11.2.0/grid/crs/install/rootcrs.pl execution failed

Before re-running root.sh, remember to deconfigure first:

/oracle/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
/oracle/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose

If the deconfig scripts report errors about missing Perl modules, run:

yum install perl-Env

Kill any leftover processes:

ps -ef| grep "/oracle/app/11.2.0/grid/"
kill -9 13774
kill -9 23162
kill -9 32429

Then apply the patch again.


While checking the environment before installing grid for the RAC build, the DNS check timed out with a warning.
Fixes:

1. On the DNS server, edit /etc/named.conf and add file "/dev/null"; to the root zone:

zone "." IN {
 
      type hint;
 
//      file "named.ca";
        file "/dev/null";

2. On each RAC node, add the following options:

[root@rac2 ~]# vi  /etc/resolv.conf
options rotate
options timeout:2
options attempts:5

Or:


```bash
[grid@racnode1 grid]$ mv /bin/nslookup /bin/nslookup.origin
[grid@racnode1 grid]$ vim /bin/nslookup
[grid@racnode1 grid]$ cat /bin/nslookup
#!/bin/bash
HOSTNAME=${1}
/bin/nslookup.origin $HOSTNAME
exit 0
```

ohasd needs to be registered as a systemd service before the root.sh script is run.

As root, create the service file:

touch /usr/lib/systemd/system/ohas.service
chmod 777 /usr/lib/systemd/system/ohas.service

Add the following content to the newly created ohas.service file:

vi /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target


[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always


[Install]
WantedBy=multi-user.target

Run the following as root:

systemctl daemon-reload
systemctl enable ohas.service
systemctl start ohas.service

Verify:

systemctl status ohas.service

If root.sh is already running, stop it with CTRL+C, do the steps above, and then run root.sh again.

Generally speaking this looks like a bug in 11g: ohasd fails to start.
Look at the process, then trace where it is stuck:

[root@racnode2 bin]# ps -ef|grep oha
root      19899      1  0 12:39 ?        00:00:00 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
root      20035  19711  0 12:41 pts/1    00:00:00 grep --color=auto oha
[root@racnode2 bin]# strace -p 19899 -o hem.log
strace: Process 19899 attached

In another window, kill the process, then look at the trace:

[root@racnode2 bin]#
[root@racnode2 bin]# tail hem.log
stat("/u01/app/11.2.0/grid/log/racnode2/ohasd/ohasd.log", {st_mode=S_IFREG|0644, st_size=522008, ...}) = 0
access("/u01/app/11.2.0/grid/log/racnode2/ohasd/ohasd.log", F_OK) = 0
statfs("/u01/app/11.2.0/grid/log/racnode2/ohasd/ohasd.log", {f_type=0x58465342, f_bsize=4096, f_blocks=9692545, f_bfree=7352756, f_bavail=7352756, f_files=38789120, f_ffree=38630502, f_fsid={64512, 0}, f_namelen=255, f_frsize=4096, f_flags=ST_VALID|ST_RELATIME}) = 0
open("/u01/app/11.2.0/grid/log/racnode2/ohasd/ohasd.log", O_WRONLY|O_APPEND) = 6
stat("/u01/app/11.2.0/grid/log/racnode2/ohasd/ohasd.log", {st_mode=S_IFREG|0644, st_size=522008, ...}) = 0
stat("/u01/app/11.2.0/grid/log/racnode2/ohasd/ohasd.log", {st_mode=S_IFREG|0644, st_size=522008, ...}) = 0
futex(0x24f13e4, FUTEX_CMP_REQUEUE_PRIVATE, 1, 2147483647, 0x24f5f90, 6) = 1
write(1, "Timed out waiting for init.ohasd"..., 67) = 67
open("/var/tmp/.oracle/npohasd", O_WRONLY <unfinished ...>
+++ killed by SIGKILL +++

Sure enough, it is stuck at /var/tmp/.oracle/npohasd. The fix: while re-running root.sh, open another window and run the command below at the same time. This works:

[root@racnode2 bin]# dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
^C0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.70967 s, 0.0 kB/s

As the grid user:

[grid@his101 ~]$ asmca

Create the disk groups:

export DISPLAY=192.168.2.101:1.0

Switch to the root user and run:

[root@localhost ~]# export DISPLAY=:0.0

[root@localhost ~]# xhost +

access control disabled, clients can connect from any host


vim ~/.bash_profile

Add /oracle/app/11.2.0/grid/bin to PATH (the same on both nodes):

PATH=$PATH:/oracle/app/11.2.0/grid/bin:$HOME/bin

Checks:

ocrcheck 


ocrconfig -add +dggrid2

crs_stat -t

crsctl stat res -t

Check the high-availability services:

crsctl check crs

ASM can be administered with asmcmd.

Check the listener status:

lsnrctl status

Fix for the error that appears at 56% of the database software installation:

cd $ORACLE_HOME/sysman/lib/
cp ins_emagent.mk ins_emagent.mk.bak
vi ins_emagent.mk

Append -lnnz11 to the line containing $(MK_EMAGENT_NMECTL).

Create the database with DBCA:

RAC database -> Create a Database -> Custom Database -> admin-managed (select all nodes) -> check "configure enterprise manager" ->
check "enable automatic maintenance tasks" -> enter the account passwords -> ASM -> use OMF (DGSYSTEM) -> multiplex redo logs and control files (DGSYSTEM, DGDATA01)
-> enter the ASM password -> deselect flashback and archiving -> the default components can all stay selected -> memory: custom -> sizing (processes = 2000) ->
character set zhs16gbk -> connection mode: dedicated -> raise open cursors to 2000 -> maximum (8192) -> temp tablespaces 20G each with autoextend off (USERS 5G)
-> at least 5 redo log groups of 500M each -> tick all three "create database" options -> save the response file -> generate the scripts and install the database

This takes roughly 30-180 minutes.

Use asmcmd lsdg to check disk group space, and browse the files with asmcmd:

[grid@his101:/home/grid]$asmcmd
ASMCMD> ls
DGDATA01/
DGGRID1/
DGGRID2/
DGRECOVERY/
DGSYSTEM/
ASMCMD> cd dgsystem
ASMCMD> ls
HISDB/
ASMCMD> cd hisdb
ASMCMD> ls
CONTROLFILE/
DATAFILE/
ONLINELOG/
PARAMETERFILE/
TEMPFILE/
spfilehisdb.ora
ASMCMD> 

Check the services:

[grid@his101:/home/grid]$crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....TA01.dg ora....up.type ONLINE    ONLINE    his101      
ora.DGGRID1.dg ora....up.type ONLINE    ONLINE    his101      
ora.DGGRID2.dg ora....up.type ONLINE    ONLINE    his101      
ora....VERY.dg ora....up.type ONLINE    ONLINE    his101      
ora....STEM.dg ora....up.type ONLINE    ONLINE    his101      
ora....ER.lsnr ora....er.type ONLINE    ONLINE    his101      
ora....N1.lsnr ora....er.type ONLINE    ONLINE    his102      
ora.asm        ora.asm.type   ONLINE    ONLINE    his101      
ora.cvu        ora.cvu.type   ONLINE    ONLINE    his102      
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....SM1.asm application    ONLINE    ONLINE    his101      
ora....01.lsnr application    ONLINE    ONLINE    his101      
ora.his101.gsd application    OFFLINE   OFFLINE               
ora.his101.ons application    ONLINE    ONLINE    his101      
ora.his101.vip ora....t1.type ONLINE    ONLINE    his101      
ora....SM2.asm application    ONLINE    ONLINE    his102      
ora....02.lsnr application    ONLINE    ONLINE    his102      
ora.his102.gsd application    OFFLINE   OFFLINE               
ora.his102.ons application    ONLINE    ONLINE    his102      
ora.his102.vip ora....t1.type ONLINE    ONLINE    his102      
ora.hisdb.db   ora....se.type ONLINE    ONLINE    his101      
ora....network ora....rk.type ONLINE    ONLINE    his101      
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    his102      
ora.ons        ora.ons.type   ONLINE    ONLINE    his101      
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    his102      
[grid@his101:/home/grid]$
ora.hisdb.db   ora....se.type ONLINE    ONLINE    his101

The database resource is online.

[grid@his101:/home/grid]$crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DGDATA01.dg
               ONLINE  ONLINE       his101                                       
               ONLINE  ONLINE       his102                                       
ora.DGGRID1.dg
               ONLINE  ONLINE       his101                                       
               ONLINE  ONLINE       his102                                       
ora.DGGRID2.dg
               ONLINE  ONLINE       his101                                       
               ONLINE  ONLINE       his102                                       
ora.DGRECOVERY.dg
               ONLINE  ONLINE       his101                                       
               ONLINE  ONLINE       his102                                       
ora.DGSYSTEM.dg
               ONLINE  ONLINE       his101                                       
               ONLINE  ONLINE       his102                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       his101                                       
               ONLINE  ONLINE       his102                                       
ora.asm
               ONLINE  ONLINE       his101                   Started             
               ONLINE  ONLINE       his102                   Started             
ora.gsd
               OFFLINE OFFLINE      his101                                       
               OFFLINE OFFLINE      his102                                       
ora.net1.network
               ONLINE  ONLINE       his101                                       
               ONLINE  ONLINE       his102                                       
ora.ons
               ONLINE  ONLINE       his101                                       
               ONLINE  ONLINE       his102                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       his102                                       
ora.cvu
      1        ONLINE  ONLINE       his102                                       
ora.his101.vip
      1        ONLINE  ONLINE       his101                                       
ora.his102.vip
      1        ONLINE  ONLINE       his102                                       
ora.hisdb.db
      1        ONLINE  ONLINE       his101                   Open                
      2        ONLINE  ONLINE       his102                   Open                
ora.oc4j
      1        ONLINE  ONLINE       his102                                       
ora.scan1.vip
      1        ONLINE  ONLINE       his102                                       
[grid@his101:/home/grid]$

This view is clearer:

ora.hisdb.db
      1        ONLINE  ONLINE       his101                   Open                
      2        ONLINE  ONLINE       his102                   Open

Setting SGA and PGA sizes:
SGA + PGA < 80% of physical memory
SGA <= 80% x 80% of memory
PGA <= 80% x 20% of memory
Taking 4G of physical memory as an example, with 1G reserved for GRID:
3G x 80% = 2.4G available for the instance
2.4G x 80% = 1920M for the SGA
2.4G x 20% = 480M for the PGA
In general the instance is given 40-60% of physical memory.
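The arithmetic above can be scripted; a minimal sketch in integer MB (the 1G Grid reservation and the 80/20 split are the rules of thumb above, not fixed Oracle requirements):

```shell
# Rule-of-thumb SGA/PGA sizing from total physical memory (MB).
TOTAL_MB=4096                                     # physical memory
GRID_MB=1024                                      # reserved for Grid Infrastructure
AVAIL_MB=$(( (TOTAL_MB - GRID_MB) * 80 / 100 ))   # 80% of the remainder for the instance
SGA_MB=$(( AVAIL_MB * 80 / 100 ))                 # 80% of that for the SGA
PGA_MB=$(( AVAIL_MB * 20 / 100 ))                 # 20% for the PGA
echo "sga_target=${SGA_MB}M pga_aggregate_target=${PGA_MB}M"
```

With 4G this yields roughly the 1920M/480M figures above, differing only by rounding.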

Check the listener status:

[grid@his101:/home/grid]$lsnrctl status

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 22-MAR-2020 07:39:53

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                21-MAR-2020 06:44:35
Uptime                    1 days 0 hr. 55 min. 18 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /oracle/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /oracle/app/grid/diag/tnslsnr/his101/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.101)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.103)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "hisdb" has 1 instance(s).
  Instance "hisdb1", status READY, has 1 handler(s) for this service...
The command completed successfully

Log in to the database:

[root@his101 backup]# su - oracle 
Last login: Sat Mar 21 07:27:22 CST 2020 on pts/0
[oracle@his101:/home/oracle]$sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Sun Mar 22 07:41:13 2020

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE
SQL> select instance_name,status from v$instance;

INSTANCE_NAME     STATUS
---------------- ------------
hisdb1         OPEN

Run the same checks on both hosts; the database is now fully created.


Configuring archiving and flashback in a cluster environment

Archiving must be enabled in production. Flashback depends on the situation and is not recommended in production.

Steps to enable archive log mode:
1) Shut down the other instance
2) Set the recovery destination
3) Set the database to non-cluster mode
4) Shut down the database and start it to mount
5) Enable archiving
6) Set the database back to cluster mode
7) Restart and open the database
8) Start the other instance
9) Verify that archive logs are generated
10) If flashback is needed, enable it during these steps (while mounted)
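These steps condense into a single SQL*Plus session on node 1; this is a sketch summarizing the walkthrough further down (instance 2 must already be shut down):

```sql
-- as sysdba on hisdb1, with hisdb2 already shut down
alter system set db_recovery_file_dest_size=2g;
alter system set db_recovery_file_dest='+dgrecovery';
alter system set cluster_database=false scope=spfile;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
alter database flashback on;      -- only if flashback is wanted (step 10)
alter system set cluster_database=true scope=spfile;
shutdown immediate
startup
-- then start hisdb2 and verify with: archive log list
```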

Steps to disable archive log mode:
1) Shut down the other instance
2) Set the database to non-cluster mode
3) Shut down the database and start it to mount
4) Disable archiving
5) Set the database back to cluster mode
6) Restart and open the database
7) Start the other instance
8) Verify that archive logs are no longer generated
9) If flashback is on, turn it off while mounted, before disabling archiving

Walkthrough:
1)

[root@his102 30070097]# su - oracle 

[oracle@his102:/home/oracle]$sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Mar 23 04:49:14 2020

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> 

On the other host, check whether archive mode is currently enabled:

[root@his101 psu]# su - oracle 
Last login: Mon Mar 23 03:33:23 CST 2020 on pts/0
[oracle@his101:/home/oracle]$sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Mar 23 05:36:42 2020

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> archive log list;
Database log mode           No Archive Mode
Automatic archival           Disabled
Archive destination           /oracle/app/oracle/product/11.2.0/db_1/dbs/arch
Oldest online log sequence     8
Current log sequence           12

Check the recovery destination and set its size:

SQL> show parameter recovery

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest        string
db_recovery_file_dest_size       big integer 0
recovery_parallelism         integer   0
SQL> alter system set db_recovery_file_dest_size=2g;

System altered.
SQL> alter system set db_recovery_file_dest='+dgrecovery';

System altered.

SQL> show parameter archive

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
archive_lag_target         integer   0
log_archive_config         string
log_archive_dest           string
log_archive_dest_1         string
log_archive_dest_10        string
log_archive_dest_11        string
log_archive_dest_12        string
log_archive_dest_13        string
log_archive_dest_14        string
log_archive_dest_15        string
log_archive_dest_16        string

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_17        string
log_archive_dest_18        string
log_archive_dest_19        string
log_archive_dest_2         string
log_archive_dest_20        string
log_archive_dest_21        string
log_archive_dest_22        string
log_archive_dest_23        string
log_archive_dest_24        string
log_archive_dest_25        string
log_archive_dest_26        string

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_27        string
log_archive_dest_28        string
log_archive_dest_29        string
log_archive_dest_3         string
log_archive_dest_30        string
log_archive_dest_31        string
log_archive_dest_4         string
log_archive_dest_5         string
log_archive_dest_6         string
log_archive_dest_7         string
log_archive_dest_8         string

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_9             string
log_archive_dest_state_1       string  enable
log_archive_dest_state_10      string  enable
log_archive_dest_state_11      string  enable
log_archive_dest_state_12      string  enable
log_archive_dest_state_13      string  enable
log_archive_dest_state_14      string  enable
log_archive_dest_state_15      string  enable
log_archive_dest_state_16      string  enable
log_archive_dest_state_17      string  enable
log_archive_dest_state_18      string  enable

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_19      string  enable
log_archive_dest_state_2       string  enable
log_archive_dest_state_20      string  enable
log_archive_dest_state_21      string  enable
log_archive_dest_state_22      string  enable
log_archive_dest_state_23      string  enable
log_archive_dest_state_24      string  enable
log_archive_dest_state_25      string  enable
log_archive_dest_state_26      string  enable
log_archive_dest_state_27      string  enable
log_archive_dest_state_28      string  enable

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_29      string  enable
log_archive_dest_state_3       string  enable
log_archive_dest_state_30      string  enable
log_archive_dest_state_31      string  enable
log_archive_dest_state_4       string  enable
log_archive_dest_state_5       string  enable
log_archive_dest_state_6       string  enable
log_archive_dest_state_7       string  enable
log_archive_dest_state_8       string  enable
log_archive_dest_state_9       string  enable
log_archive_duplex_dest        string

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
log_archive_format         string  %t_%s_%r.dbf
log_archive_local_first        boolean   TRUE
log_archive_max_processes      integer   4
log_archive_min_succeed_dest       integer   1
log_archive_start        boolean   FALSE
log_archive_trace        integer   0
standby_archive_dest         string  ?/dbs/arch
SQL> show parameter cluster

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
cluster_database         boolean   TRUE
cluster_database_instances       integer   2
cluster_interconnects        string
SQL> alter system set cluster_database=false scope=spfile;

System altered.

SQL> shutdown immediate;            
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1603411968 bytes
Fixed Size        2253664 bytes
Variable Size     721423520 bytes
Database Buffers    872415232 bytes
Redo Buffers        7319552 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> alter database flashback on;

Database altered.

SQL> archive log list;
Database log mode        Archive Mode
Automatic archival         Enabled
Archive destination        USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     8
Next log sequence to archive   12
Current log sequence         12
SQL> desc v$database;
 Name            Null?    Type
 ----------------------------------------- -------- ----------------------------
 DBID               NUMBER
 NAME               VARCHAR2(9)
 CREATED              DATE
 RESETLOGS_CHANGE#            NUMBER
 RESETLOGS_TIME             DATE
 PRIOR_RESETLOGS_CHANGE#          NUMBER
 PRIOR_RESETLOGS_TIME           DATE
 LOG_MODE             VARCHAR2(12)
 CHECKPOINT_CHANGE#           NUMBER
 ARCHIVE_CHANGE#            NUMBER
 CONTROLFILE_TYPE           VARCHAR2(7)
 CONTROLFILE_CREATED            DATE
 CONTROLFILE_SEQUENCE#            NUMBER
 CONTROLFILE_CHANGE#            NUMBER
 CONTROLFILE_TIME           DATE
 OPEN_RESETLOGS             VARCHAR2(11)
 VERSION_TIME             DATE
 OPEN_MODE              VARCHAR2(20)
 PROTECTION_MODE            VARCHAR2(20)
 PROTECTION_LEVEL           VARCHAR2(20)
 REMOTE_ARCHIVE             VARCHAR2(8)
 ACTIVATION#              NUMBER
 SWITCHOVER#              NUMBER
 DATABASE_ROLE              VARCHAR2(16)
 ARCHIVELOG_CHANGE#           NUMBER
 ARCHIVELOG_COMPRESSION           VARCHAR2(8)
 SWITCHOVER_STATUS            VARCHAR2(20)
 DATAGUARD_BROKER           VARCHAR2(8)
 GUARD_STATUS             VARCHAR2(7)
 SUPPLEMENTAL_LOG_DATA_MIN          VARCHAR2(8)
 SUPPLEMENTAL_LOG_DATA_PK         VARCHAR2(3)
 SUPPLEMENTAL_LOG_DATA_UI         VARCHAR2(3)
 FORCE_LOGGING              VARCHAR2(3)
 PLATFORM_ID              NUMBER
 PLATFORM_NAME              VARCHAR2(101)
 RECOVERY_TARGET_INCARNATION#         NUMBER
 LAST_OPEN_INCARNATION#           NUMBER
 CURRENT_SCN              NUMBER
 FLASHBACK_ON             VARCHAR2(18)
 SUPPLEMENTAL_LOG_DATA_FK         VARCHAR2(3)
 SUPPLEMENTAL_LOG_DATA_ALL          VARCHAR2(3)
 DB_UNIQUE_NAME             VARCHAR2(30)
 STANDBY_BECAME_PRIMARY_SCN         NUMBER
 FS_FAILOVER_STATUS           VARCHAR2(22)
 FS_FAILOVER_CURRENT_TARGET         VARCHAR2(30)
 FS_FAILOVER_THRESHOLD            NUMBER
 FS_FAILOVER_OBSERVER_PRESENT         VARCHAR2(7)
 FS_FAILOVER_OBSERVER_HOST          VARCHAR2(512)
 CONTROLFILE_CONVERTED            VARCHAR2(3)
 PRIMARY_DB_UNIQUE_NAME           VARCHAR2(30)
 SUPPLEMENTAL_LOG_DATA_PL         VARCHAR2(3)
 MIN_REQUIRED_CAPTURE_CHANGE#         NUMBER

SQL> select FLASHBACK_ON from v$database;

FLASHBACK_ON
------------------
YES

SQL> alter system set cluster_database=true scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1603411968 bytes
Fixed Size        2253664 bytes
Variable Size     721423520 bytes
Database Buffers    872415232 bytes
Redo Buffers        7319552 bytes
Database mounted.
Database opened.

Back on the second host, open the database and verify that archive mode has taken effect:

SQL> startup
ORACLE instance started.

Total System Global Area 1603411968 bytes
Fixed Size        2253664 bytes
Variable Size     754977952 bytes
Database Buffers    838860800 bytes
Redo Buffers        7319552 bytes
Database mounted.
Database opened.
SQL> archive log list;
Database log mode        Archive Mode
Automatic archival         Enabled
Archive destination        USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     1
Next log sequence to archive   4
Current log sequence         4

Test archive mode.
On the first host:

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL> /

System altered.

The archive and flashback files can now be seen in asmcmd.

When disabling, turn off flashback first, then archiving.
The procedure is the same: first shut down the second instance

[root@his102 ~]# su - oracle 
Last login: Mon Mar 23 03:34:16 CST 2020 on pts/0
[oracle@his102:/home/oracle]$sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Mar 23 06:28:52 2020

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

On the first instance:

SQL> show parameter cluster

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
cluster_database         boolean   TRUE
cluster_database_instances       integer   2
cluster_interconnects        string
SQL> alter system set cluster_database=false scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> starup mount;
SP2-0734: unknown command beginning "starup mou..." - rest of line ignored.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1603411968 bytes
Fixed Size        2253664 bytes
Variable Size     721423520 bytes
Database Buffers    872415232 bytes
Redo Buffers        7319552 bytes
Database mounted.
SQL> alter database flashback off;

Database altered.

SQL> alter database noarchivelog;

Database altered.

SQL> alter system set cluster_database=true scope=spfile;

System altered.

SQL> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1603411968 bytes
Fixed Size        2253664 bytes
Variable Size     721423520 bytes
Database Buffers    872415232 bytes
Redo Buffers        7319552 bytes
Database mounted.
Database opened.
SQL> archive log list;
Database log mode        No Archive Mode
Automatic archival         Disabled
Archive destination        USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     14
Current log sequence         18
Start the instance on the second host:
SQL> startup
ORACLE instance started.

Total System Global Area 1603411968 bytes
Fixed Size        2253664 bytes
Variable Size     754977952 bytes
Database Buffers    838860800 bytes
Redo Buffers        7319552 bytes
Database mounted.
Database opened.
SQL> archive log list;
Database log mode        No Archive Mode
Automatic archival         Disabled
Archive destination        USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     2
Current log sequence         6
SQL> 

Load balancing and failover configuration test:
Test environment:
client: PL/SQL Developer

SQL> show parameter listener

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
listener_networks        string
local_listener           string   (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.104)(PORT=1521))
remote_listener          string  hisscan:1521
SQL> show parameter service

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
service_names          string  hisdb
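On the client side, a typical tnsnames.ora entry points at the SCAN. A hedged sketch: the alias hisscan comes from the remote_listener output above, while the LOAD_BALANCE/FAILOVER_MODE values are illustrative defaults, not taken from this installation:

```text
HISDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hisscan)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (FAILOVER = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = hisdb)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )
```

Killing one instance while a client query runs should then fail the session over to the surviving node.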

RAC post-installation configuration and monitoring

1. 180-day password expiry

select * from dba_profiles where profile='DEFAULT';
alter profile default limit PASSWORD_LIFE_TIME UNLIMITED;

2. Disable auditing

alter system set audit_trail=none scope=spfile;

3. Other parameters

4. Log files

SQL> show parameter dump

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
background_core_dump         string  partial
background_dump_dest         string  /oracle/app/oracle/diag/rdbms/hisdb/hisdb1/trace
core_dump_dest               string  /oracle/app/oracle/diag/rdbms/hisdb/hisdb1/cdump
max_dump_file_size           string  unlimited
shadow_core_dump             string  partial
user_dump_dest               string  /oracle/app/oracle/diag/rdbms/hisdb/hisdb1/trace

ASM and listener (tnslsnr) log files:

[root@his101 ~]# su - grid
Last login: Mon Mar 23 06:21:15 CST 2020 on pts/1
[grid@his101:/home/grid]$sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Mar 23 08:16:29 2020

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> conn /as sysasm
Connected.
SQL> show parameter dump

NAME             TYPE  VALUE
------------------------------------ ----------- ------------------------------
background_core_dump         string  partial
background_dump_dest         string  /oracle/app/grid/diag/asm/+asm/+ASM1/trace
core_dump_dest               string  /oracle/app/grid/diag/asm/+asm/+ASM1/cdump
max_dump_file_size           string  unlimited
shadow_core_dump             string  partial
user_dump_dest               string  /oracle/app/grid/diag/asm/+asm/+ASM1/trace

Clusterware (RAC) logs:

grid@his101:/oracle/app/11.2.0/grid/log

System logs:

grid@his101:/var/log

EM management console:

[root@his101 ~]# su - oracle 
Last login: Mon Mar 23 05:53:04 CST 2020 on pts/0
[oracle@his101:/home/oracle]$emctl start dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.4.0 
Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
https://his101:1158/em/console/aboutApplication
Starting Oracle Enterprise Manager 11g Database Control ...... started. 
------------------------------------------------------------------
Logs are generated in directory /oracle/app/oracle/product/11.2.0/db_1/his101_hisdb/sysman/log 


[oracle@his101:/home/oracle]$emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.4.0 
Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
https://his101:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ... 
 ...  Stopped. 

Disable CRS autostart at boot:

[root@his101 ~]# crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@his102 ~]# crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.

ASM: how to use the dgdata01 disk group
1. Create a tablespace

CREATE TABLESPACE tjdata01 DATAFILE '+dgdata01' SIZE 50M autoextend off
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
alter TABLESPACE tjdata01 add datafile '+dgdata01' SIZE 50M autoextend off;

2. Create a user

create user test identified by test default tablespace tjdata01;
grant dba to test;

3. Create a table and insert data

conn test/test;
create table table01(
id number,
name varchar2(20)
);
insert into table01 values(1,'test1');
insert into table01 values(2,'test2');
commit;
select * from table01;
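To confirm where the new datafiles landed, query the data dictionary (a hedged check; TJDATA01 is the tablespace created above):

```sql
select file_name, bytes/1024/1024 as mb
  from dba_data_files
 where tablespace_name = 'TJDATA01';
```

The file_name values should begin with +DGDATA01, showing the files were created inside that disk group.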

Stopping and starting RAC

su - oracle
sqlplus "/as sysdba"
shutdown immediate;
su - root
crsctl stop crs

shutdown -h 0   (or reboot)

How to start RAC after boot

su - root 
crsctl start crs
crsctl check crs
crsctl status res -t 
crs_stat -t
su - oracle
sqlplus "/as sysdba"
select open_mode from v$database;
select instance_name,status from v$instance;

Then check the grid and db logs for anything abnormal.
Post-startup checks:

crsctl check crs
crsctl status res -t
crs_stat -t
asmcmd lsdg
ocrcheck

Backup:

As root (write the archive outside the directory being archived, otherwise tar will warn and try to include its own output):

tar zcvf /root/oracle.tar.gz /backup

Both machines must be backed up.


crsctl: cluster management tool
srvctl: cluster service management tool

[grid@his101:/home/grid]$crsctl query crs softwareversion his101
[grid@his101:/home/grid]$crsctl query crs softwareversion his102
[grid@his101:/home/grid]$crsctl query crs softwareversion -all
[grid@his101:/home/grid]$crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   2217bd1707f84f20bf835bf8f50e9a81 (/dev/mapper/asm-dggrid1) [DGGRID1]
 2. ONLINE   542de83ba7554fa0bf5cc0315b795f03 (/dev/mapper/asm-dggrid2) [DGGRID1]
 3. ONLINE   0cff0db45e344f8abf7832ab1aeba4da (/dev/mapper/asm-dggrid3) [DGGRID1]
Located 3 voting disk(s).
[grid@his101:/home/grid]$crsctl query crs administrator
CRS Administrator List: *
[grid@his101:/home/grid]$crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]

ocrconfig
ocrdump

[grid@his101:/home/grid]$ocrconfig -showbackup

his102     2020/03/24 07:06:58     /oracle/app/11.2.0/grid/cdata/his-cluster/backup00.ocr

his102     2020/03/24 03:06:58     /oracle/app/11.2.0/grid/cdata/his-cluster/backup01.ocr

his102     2020/03/23 23:06:57     /oracle/app/11.2.0/grid/cdata/his-cluster/backup02.ocr

his102     2020/03/23 07:06:53     /oracle/app/11.2.0/grid/cdata/his-cluster/day.ocr

his101     2020/03/19 12:12:28     /oracle/app/11.2.0/grid/cdata/his-cluster/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
[root@his101 ~]# ocrconfig -manualbackup

his102     2020/03/24 09:09:52     /oracle/app/11.2.0/grid/cdata/his-cluster/backup_20200324_090952.ocr

srvctl -h |more shows the full usage of this powerful command:

[grid@his101:/home/grid]$srvctl -h |more
Usage: srvctl [-V]
Usage: srvctl add database -d <db_unique_name> -o <oracle_home> [-c {RACONENODE | RAC | SINGLE} [-e <server_list>] [-i <inst_name>] [-w <timeout>]] [-m <domain_
name>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-n <db_name>] [-y {AUTOMAT
IC | MANUAL | NORESTART}] [-g "<serverpool_list>"] [-x <node_name>] [-a "<diskgroup_list>"] [-j "<acfs_path_list>"]
Usage: srvctl config database [-d <db_unique_name> [-a] ] [-v]
Usage: srvctl start database -d <db_unique_name> [-o <start_options>] [-n <node>]
Usage: srvctl stop database -d <db_unique_name> [-o <stop_options>] [-f]
Usage: srvctl status database -d <db_unique_name> [-f] [-v]
Usage: srvctl enable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl disable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl modify database -d <db_unique_name> [-n <db_name>] [-o <oracle_home>] [-u <oracle_user>] [-e <server_list>] [-w <timeout>] [-m <domain>] [-p <spfi
le>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-y {AUTOMATIC | MANUAL | NORESTART}] [-g "
<serverpool_list>" [-x <node_name>]] [-a "<diskgroup_list>"|-z] [-j "<acfs_path_list>"] [-f]
Usage: srvctl remove database -d <db_unique_name> [-f] [-y]
Usage: srvctl getenv database -d <db_unique_name> [-t "<name_list>"]
Usage: srvctl setenv database -d <db_unique_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl unsetenv database -d <db_unique_name> -t "<name_list>"
Usage: srvctl convert database -d <db_unique_name> -c RAC [-n <node>]
Usage: srvctl convert database -d <db_unique_name> -c RACONENODE [-i <inst_name>] [-w <timeout>]
Usage: srvctl relocate database -d <db_unique_name> {[-n <target>] [-w <timeout>] [-o <stop_option>] | -a [-r]} [-v]
Usage: srvctl upgrade database -d <db_unique_name> -o <oracle_home>
Usage: srvctl downgrade database -d <db_unique_name> -o <oracle_home> -t <to_version>
Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
Usage: srvctl stop instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>}  [-o <stop_options>] [-f]
Usage: srvctl status instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-f] [-v]
Usage: srvctl enable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl disable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl modify instance -d <db_unique_name> -i <inst_name> { -n <node_name> | -z }
Usage: srvctl remove instance -d <db_unique_name> -i <inst_name> [-f] [-y]
Usage: srvctl add service -d <db_unique_name> -s <service_name> {-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}] | -g <pool_name>
 [-c {UNIFORM | SINGLETON}] } [-k   <net_num>] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}
] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_del
ay>] [-t <edition>] [-f]
Usage: srvctl add service -d <db_unique_name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"} [-f]
Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-v]
Usage: srvctl enable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl disable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl status service -d <db_unique_name> [-s "<service_name_list>"] [-f] [-v]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -n -i "<preferred_list>" [-a "<available_list>"] [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> [-g <pool_name>] [-c {UNIFORM | SINGLETON}] [-P {BASIC|NONE}] [-l [PRIMARY][,PHYSICAL_STANDBY
][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}][-q {true|false}] [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {
NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <integer>] [-w <integer>] [-t <edition>]
Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name> | -c <current_node> -n <target_node>} [-f]
Usage: srvctl remove service -d <db_unique_name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
Usage: srvctl stop service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-f]
Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } } [-e <em-port>] [-l <ons-local
-port>]  [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl config nodeapps [-a] [-g] [-s]
Usage: srvctl modify nodeapps {[-n <node_name> -A <new_vip_address>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]} [-u {static|dhcp|mixed
}] [-e <em-port>] [ -l <ons-local-port> ] [-r <ons-remote-port> ] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl start nodeapps [-n <node_name>] [-g] [-v]
Usage: srvctl stop nodeapps [-n <node_name>] [-g] [-f] [-r] [-v]
Usage: srvctl status nodeapps
Usage: srvctl enable nodeapps [-g] [-v]
Usage: srvctl disable nodeapps [-g] [-v]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-t "<name_list>"]
Usage: srvctl setenv nodeapps {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"} [-v]
Usage: srvctl unsetenv nodeapps -t "<name_list>" [-v]
Usage: srvctl add vip -n <node_name> -k <network_number> -A <name|ip>/<netmask>/[if1[|if2...]] [-v]
Usage: srvctl config vip { -n <node_name> | -i <vip_name> }
Usage: srvctl disable vip -i <vip_name> [-v]
Usage: srvctl enable vip -i <vip_name> [-v]
Usage: srvctl remove vip -i "<vip_name_list>" [-f] [-y] [-v]
Usage: srvctl getenv vip -i <vip_name> [-t "<name_list>"]
Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl stop vip { -n <node_name>  | -i <vip_name> } [-f] [-r] [-v]
Usage: srvctl relocate vip -i <vip_name> [-n <node_name>] [-f] [-v]
Usage: srvctl status vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl setenv vip -i <vip_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"} [-v]
Usage: srvctl unsetenv vip -i <vip_name> -t "<name_list>" [-v]
Usage: srvctl add network [-k <net_num>] -S <subnet>/<netmask>/[if1[|if2...]] [-w <network_type>] [-v]
Usage: srvctl config network [-k <network_number>]
Usage: srvctl modify network [-k <network_number>] [-S <subnet>/<netmask>[/if1[|if2...]]] [-w <network_type>] [-v]
Usage: srvctl remove network {-k <network_number> | -a} [-f] [-v]
Usage: srvctl add asm [-l <lsnr_name>]
Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
Usage: srvctl stop asm [-n <node_name>] [-o <stop_options>] [-f]
Usage: srvctl config asm [-a]
Usage: srvctl status asm [-n <node_name>] [-a] [-v]
Usage: srvctl enable asm [-n <node_name>]
Usage: srvctl disable asm [-n <node_name>]
Usage: srvctl modify asm [-l <lsnr_name>] 
Usage: srvctl remove asm [-f]
Usage: srvctl getenv asm [-t <name>[, ...]]
Usage: srvctl setenv asm -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv asm -t "<name>[, ...]"
Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl stop diskgroup -g <dg_name> [-n "<node_list>"] [-f]
Usage: srvctl status diskgroup -g <dg_name> [-n "<node_list>"] [-a] [-v]
Usage: srvctl enable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl disable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl remove diskgroup -g <dg_name> [-f]
Usage: srvctl add listener [-l <lsnr_name>] [-s] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-o <oracle_home>] [-k <
Usage: srvctl config listener [-l <lsnr_name>] [-a]
Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl stop listener [-l <lsnr_name>] [-n <node_name>] [-f]
Usage: srvctl status listener [-l <lsnr_name>] [-n <node_name>] [-v]
Usage: srvctl enable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl disable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl modify listener [-l <lsnr_name>] [-o <oracle_home>] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-u <or
Usage: srvctl remove listener [-l <lsnr_name> | -a] [-f]
Usage: srvctl getenv listener [-l <lsnr_name>] [-t <name>[, ...]]
Usage: srvctl setenv listener [-l <lsnr_name>] -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv listener [-l <lsnr_name>] -t "<name>[, ...]"
Usage: srvctl add scan -n <scan_name> [-k <network_number>] [-S <subnet>/<netmask>[/if1[|if2|...]]]
Usage: srvctl config scan [-i <ordinal_number>]
Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
Usage: srvctl stop scan [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan [-i <ordinal_number>] [-v]
Usage: srvctl enable scan [-i <ordinal_number>]
Usage: srvctl disable scan [-i <ordinal_number>]
Usage: srvctl modify scan -n <scan_name>
Usage: srvctl remove scan [-f] [-y]
Usage: srvctl add scan_listener [-l <lsnr_name_prefix>] [-s] [-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]] 
Usage: srvctl config scan_listener [-i <ordinal_number>]
Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
Usage: srvctl stop scan_listener [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan_listener [-i <ordinal_number>] [-v]
Usage: srvctl enable scan_listener [-i <ordinal_number>]
Usage: srvctl disable scan_listener [-i <ordinal_number>]
Usage: srvctl modify scan_listener {-u|-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]} 
Usage: srvctl remove scan_listener [-f] [-y]
Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"] [-f]
Usage: srvctl config srvpool [-g <pool_name>]
Usage: srvctl status srvpool [-g <pool_name>] [-a]
Usage: srvctl status server -n "<server_list>" [-a]
Usage: srvctl relocate server -n "<server_list>" -g <pool_name> [-f]
Usage: srvctl modify srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"] [-f]
Usage: srvctl remove srvpool -g <pool_name>
Usage: srvctl add oc4j [-v]
Usage: srvctl config oc4j
Usage: srvctl start oc4j [-v]
Usage: srvctl stop oc4j [-f] [-v]
Usage: srvctl relocate oc4j [-n <node_name>] [-v]
Usage: srvctl status oc4j [-n <node_name>] [-v]
Usage: srvctl enable oc4j [-n <node_name>] [-v]
Usage: srvctl disable oc4j [-n <node_name>] [-v]
Usage: srvctl modify oc4j -p <oc4j_rmi_port> [-v] [-f]
Usage: srvctl remove oc4j [-f] [-v]
Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl stop home -o <oracle_home> -s <state_file> -n <node_name> [-t <stop_options>] [-f]
Usage: srvctl status home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl add filesystem -d <volume_device> -v <volume_name> -g <dg_name> [-m <mountpoint_path>] [-u <user>]
Usage: srvctl config filesystem -d <volume_device>
Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
Usage: srvctl stop filesystem -d <volume_device> [-n <node_name>] [-f]
Usage: srvctl status filesystem -d <volume_device> [-v]
Usage: srvctl enable filesystem -d <volume_device>
Usage: srvctl disable filesystem -d <volume_device>
Usage: srvctl modify filesystem -d <volume_device> -u <user>
Usage: srvctl remove filesystem -d <volume_device> [-f]
Usage: srvctl start gns [-l <log_level>] [-n <node_name>] [-v]
Usage: srvctl stop gns [-n <node_name>] [-f] [-v]
Usage: srvctl config gns [-a] [-d] [-k] [-m] [-n <node_name>] [-p] [-s] [-V] [-q <name>] [-l] [-v]
Usage: srvctl status gns [-n <node_name>] [-v]
Usage: srvctl enable gns [-n <node_name>] [-v]
Usage: srvctl disable gns [-n <node_name>] [-v]
Usage: srvctl relocate gns [-n <node_name>] [-v]
Usage: srvctl add gns -d <domain> -i <vip_name|ip> [-v]
Usage: srvctl modify gns {-l <log_level> | [-i <ip_address>] [-N <name> -A <address>] [-D <name> -A <address>] [-c <name> -a <alias>] [-u <alias>] [-r <address>
] [-V <name>] [-p <parameter>:<value>[,<parameter>:<value>...]] [-F <forwarded_domains>] [-R <refused_domains>] [-X <excluded_interfaces>] [-v]}
Usage: srvctl remove gns [-f] [-v]
Usage: srvctl add cvu [-t <check_interval_in_minutes>]
Usage: srvctl config cvu
Usage: srvctl start cvu [-n <node_name>]
Usage: srvctl stop cvu [-f]
Usage: srvctl relocate cvu [-n <node_name>]
Usage: srvctl status cvu [-n <node_name>]
Usage: srvctl enable cvu [-n <node_name>]
Usage: srvctl disable cvu [-n <node_name>]
Usage: srvctl modify cvu -t <check_interval_in_minutes>
Usage: srvctl remove cvu [-f]
[grid@his101:/home/grid]$srvctl status database -d hisdb
Instance hisdb1 is running on node his101
Instance hisdb2 is running on node his102
[grid@his101:/home/grid]$srvctl status instance -d hisdb -i hisdb1
Instance hisdb1 is running on node his101
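A monitoring script can consume the `srvctl status database` output shown above. This is a minimal sketch; the fixed phrase "Instance X is running on node Y" is an assumption that holds for English 11.2-style output:

```python
# Sketch: turn `srvctl status database -d hisdb` output into a dict
# mapping each running instance to its hosting node.
import re

def parse_instance_status(text: str) -> dict:
    """Map instance name -> node for every running instance."""
    pattern = re.compile(r"Instance (\S+) is running on node (\S+)")
    return {inst: node for inst, node in pattern.findall(text)}

# Sample text captured from the transcript above.
sample = """\
Instance hisdb1 is running on node his101
Instance hisdb2 is running on node his102
"""

print(parse_instance_status(sample))
# {'hisdb1': 'his101', 'hisdb2': 'his102'}
```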

[grid@his101:/home/grid]$srvctl config database
hisdb

[grid@his101:/home/grid]$srvctl config database -d hisdb
Database unique name: hisdb
Database name: hisdb
Oracle home: /oracle/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DGSYSTEM/hisdb/spfilehisdb.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: hisdb
Database instances: hisdb1,hisdb2
Disk Groups: DGSYSTEM,DGDATA01
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed
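The `srvctl config database -d` output above is simple "Key: value" text, so a health-check script can parse it and assert on fields such as the management policy. A small sketch (splitting on the first colon per line is an assumption that matches the output shown):

```python
# Sketch: parse `srvctl config database -d hisdb` output into a dict.
def parse_srvctl_config(text: str) -> dict:
    cfg = {}
    for line in text.splitlines():
        if ":" in line:
            # Split only on the first colon so values like
            # +DGSYSTEM/hisdb/spfilehisdb.ora survive intact.
            key, _, value = line.partition(":")
            cfg[key.strip()] = value.strip()
    return cfg

# Abridged sample from the transcript above.
sample = """\
Database unique name: hisdb
Oracle home: /oracle/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DGSYSTEM/hisdb/spfilehisdb.ora
Start options: open
Management policy: AUTOMATIC
Database instances: hisdb1,hisdb2
Disk Groups: DGSYSTEM,DGDATA01
Type: RAC
"""

cfg = parse_srvctl_config(sample)
print(cfg["Management policy"])              # AUTOMATIC
print(cfg["Database instances"].split(","))  # ['hisdb1', 'hisdb2']
```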
[grid@his101:/home/grid]$srvctl status asm
ASM is running on his102,his101
[grid@his101:/home/grid]$srvctl status asm -a
ASM is running on his102,his101
ASM is enabled.
[grid@his101:/home/grid]$srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): his102,his101
[grid@his101:/home/grid]$srvctl config nodeapps -a -g -s -l
Warning:-l option has been deprecated and will be ignored.
Network exists: 1/192.168.2.0/255.255.255.0/ens33, type static
VIP exists: /his101vip/192.168.2.103/192.168.2.0/255.255.255.0/ens33, hosting node his101
VIP exists: /his102vip/192.168.2.104/192.168.2.0/255.255.255.0/ens33, hosting node his102
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
  /oracle/app/11.2.0/grid on node(s) his101,his102
End points: TCP:1521
[grid@his101:/home/grid]$srvctl stop instance -d hisdb -i hisdb1
[grid@his101:/home/grid]$srvctl start instance -d hisdb -i hisdb1
[grid@his101:/home/grid]$crs_stop ora.hisdb.db
Attempting to stop `ora.hisdb.db` on member `his102`
Attempting to stop `ora.hisdb.db` on member `his101`
Stop of `ora.hisdb.db` on member `his101` succeeded.
Stop of `ora.hisdb.db` on member `his102` succeeded.
[grid@his101:/home/grid]$crs_start ora.hisdb.db
Attempting to start `ora.hisdb.db` on member `his101`
Attempting to start `ora.hisdb.db` on member `his102`
Start of `ora.hisdb.db` on member `his101` succeeded.
Start of `ora.hisdb.db` on member `his102` succeeded.
[grid@his101:/home/grid]$crs_stat -help
This command is deprecated and has been replaced by 'crsctl status resource'
This command remains for backward compatibility only

Usage:  crs_stat [resource_name [...]] [-v] [-l] [-q] [-c cluster_member]
        crs_stat [resource_name [...]] -t [-v] [-q] [-c cluster_member]
        crs_stat -p [resource_name [...]] [-q]
        crs_stat [-a] application -g
        crs_stat [-a] application -r [-c cluster_member]
        crs_stat -f [resource_name [...]] [-q] [-c cluster_member]
        crs_stat -ls [resource_name [...]] [-q]

In 11g a blanket `crs_stop -all`, as shown below, is not recommended; stopping individual resources is fine:

[grid@his101:/home/grid]$crs_stop -all
CRS-2500: Cannot stop resource 'ora.gsd' as it is not running
Attempting to stop `ora.DGDATA01.dg` on member `his101`
Attempting to stop `ora.DGGRID1.dg` on member `his101`
Attempting to stop `ora.DGGRID2.dg` on member `his101`
Attempting to stop `ora.DGRECOVERY.dg` on member `his101`
Attempting to stop `ora.DGSYSTEM.dg` on member `his101`
Attempting to stop `ora.hisdb.db` on member `his101`
Attempting to stop `ora.DGDATA01.dg` on member `his102`
Attempting to stop `ora.DGGRID1.dg` on member `his102`
Attempting to stop `ora.DGGRID2.dg` on member `his102`
Attempting to stop `ora.DGRECOVERY.dg` on member `his102`
Attempting to stop `ora.DGSYSTEM.dg` on member `his102`
Attempting to stop `ora.hisdb.db` on member `his102`
Attempting to stop `ora.LISTENER.lsnr` on member `his102`
Attempting to stop `ora.ons` on member `his101`
Attempting to stop `ora.LISTENER.lsnr` on member `his101`
Attempting to stop `ora.oc4j` on member `his102`
Stop of `ora.LISTENER.lsnr` on member `his101` succeeded.
Attempting to stop `ora.his101.vip` on member `his101`
Stop of `ora.LISTENER.lsnr` on member `his102` succeeded.
Attempting to stop `ora.cvu` on member `his102`
Attempting to stop `ora.ons` on member `his102`
Attempting to stop `ora.his102.vip` on member `his102`
Attempting to stop `ora.LISTENER_SCAN1.lsnr` on member `his102`
Stop of `ora.cvu` on member `his102` succeeded.
Stop of `ora.LISTENER_SCAN1.lsnr` on member `his102` succeeded.
Attempting to stop `ora.scan1.vip` on member `his102`
Stop of `ora.DGRECOVERY.dg` on member `his101` succeeded.
Stop of `ora.ons` on member `his101` succeeded.
Stop of `ora.DGRECOVERY.dg` on member `his102` succeeded.
Stop of `ora.his101.vip` on member `his101` succeeded.
Attempting to stop `ora.net1.network` on member `his101`
Stop of `ora.net1.network` on member `his101` succeeded.
Stop of `ora.ons` on member `his102` succeeded.
Stop of `ora.his102.vip` on member `his102` succeeded.
Stop of `ora.scan1.vip` on member `his102` succeeded.
Attempting to stop `ora.net1.network` on member `his102`
Stop of `ora.net1.network` on member `his102` succeeded.
Stop of `ora.oc4j` on member `his102` succeeded.
Stop of `ora.DGGRID1.dg` on member `his101` succeeded.
Stop of `ora.DGGRID2.dg` on member `his101` succeeded.
Stop of `ora.DGGRID2.dg` on member `his102` succeeded.
Stop of `ora.DGGRID1.dg` on member `his102` succeeded.
CRS-5017: The resource action "ora.DGDATA01.dg stop" encountered the following error: 
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DGDATA01" precludes its dismount
. For details refer to "(:CLSN00108:)" in "/oracle/app/11.2.0/grid/log/his101/agent/crsd/oraagent_grid//oraagent_grid.log".

CRS-5017: The resource action "ora.DGSYSTEM.dg stop" encountered the following error: 
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DGSYSTEM" precludes its dismount
. For details refer to "(:CLSN00108:)" in "/oracle/app/11.2.0/grid/log/his101/agent/crsd/oraagent_grid//oraagent_grid.log".

CRS-5017: The resource action "ora.DGDATA01.dg stop" encountered the following error: 
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DGDATA01" precludes its dismount
. For details refer to "(:CLSN00108:)" in "/oracle/app/11.2.0/grid/log/his102/agent/crsd/oraagent_grid//oraagent_grid.log".

CRS-5017: The resource action "ora.DGSYSTEM.dg stop" encountered the following error: 
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DGSYSTEM" precludes its dismount
. For details refer to "(:CLSN00108:)" in "/oracle/app/11.2.0/grid/log/his102/agent/crsd/oraagent_grid//oraagent_grid.log".

Stop of `ora.hisdb.db` on member `his101` succeeded.
Attempting to stop `ora.DGDATA01.dg` on member `his101`
Attempting to stop `ora.DGSYSTEM.dg` on member `his101`
Stop of `ora.DGDATA01.dg` on member `his101` succeeded.
Stop of `ora.DGSYSTEM.dg` on member `his101` succeeded.
Attempting to stop `ora.asm` on member `his101`
Stop of `ora.hisdb.db` on member `his102` succeeded.
Attempting to stop `ora.DGDATA01.dg` on member `his102`
Attempting to stop `ora.DGSYSTEM.dg` on member `his102`
Stop of `ora.DGDATA01.dg` on member `his102` succeeded.
Stop of `ora.DGSYSTEM.dg` on member `his102` succeeded.
Attempting to stop `ora.asm` on member `his102`
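A wrapper script around output like the `crs_stop -all` transcript above can distinguish a clean shutdown from one that hit errors such as ORA-15027 on an in-use diskgroup. A sketch that extracts the distinct ORA-/CRS- codes:

```python
# Sketch: pull ORA-/CRS- error codes out of clusterware output.
import re

def find_errors(text: str) -> list:
    """Return the distinct ORA-/CRS- codes in order of first appearance."""
    seen = []
    for code in re.findall(r"\b(?:ORA|CRS)-\d{4,5}\b", text):
        if code not in seen:
            seen.append(code)
    return seen

# Abridged sample from the transcript above.
sample = """\
CRS-2500: Cannot stop resource 'ora.gsd' as it is not running
Stop of `ora.LISTENER.lsnr` on member `his101` succeeded.
CRS-5017: The resource action "ora.DGDATA01.dg stop" encountered the following error:
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DGDATA01" precludes its dismount
"""

print(find_errors(sample))
# ['CRS-2500', 'CRS-5017', 'ORA-15032', 'ORA-15027']
```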

Some commonly used SQL commands:

select distinct owner from all_objects;
SELECT username,decode(password,NULL,'NULL',password) password FROM dba_users;

SELECT name,password FROM sys.user$ WHERE name='SCOTT';

SELECT * FROM DBA_USERS_WITH_DEFPWD WHERE username='test';

SELECT * FROM DBA_USERS_WITH_DEFPWD WHERE username='SCOTT';

Part 4: Patching


Applying the PSU:

http://www.bubuko.com/infodetail-3351356.html (for reference)


Patch both the grid home and the oracle home. The overall workflow:
1) Download the patch, upload it to the server, and unzip it
2) Stop the application and services
3) In production, always back up first (stop the database and the cluster, back up the installed software, back up the database)
4) Install the new OPatch tool
5) Apply the patch (the database is stopped, but the clusterware must stay up)
6) Verify
7) Start the application and test.

chown -R grid:oinstall /backup/psu  # run on both nodes
[root@his101 psu]# ll
total 110468
drwxr-x--- 15 grid oinstall      4096 Apr 12  2019 OPatch
-rw-r--r--  1 grid oinstall 113112960 Mar 22 04:51 p6880880_112000_Linux-x86-64.zip
su - grid
cd /backup/psu
[grid@his101:/backup/psu]$unzip p6880880_112000_Linux-x86-64.zip
[grid@his101:/backup/psu]$unzip p30070097_112040_Linux-x86-64.zip

Create the directory on the second node and copy the patches over:

[root@his102 ~]# mkdir /backup/psu
[root@his102 ~]# chown -R grid:oinstall /backup
[root@his102 ~]# su - grid
Last login: Sun Mar 22 08:11:35 CST 2020 on pts/0
[grid@his102:/home/grid]$cd /backup/psu
[grid@his102:/backup/psu]$scp his101:/backup/psu/p*zip .
[grid@his102:/backup/psu]$unzip p6880880_112000_Linux-x86-64.zip
[grid@his102:/backup/psu]$unzip p30070097_112040_Linux-x86-64.zip

Install the new OPatch tool; it is needed for both the grid and oracle users on both nodes.
As the grid user:

[root@his101 backup]# cd /oracle/app/11.2.0/grid/
[root@his101 grid]# mv OPatch OPatch-bak
[root@his101 grid]# cd /oracle/app/11.2.0/grid/OPatch-bak/
[root@his101 OPatch-bak]# ./opatch version
OPatch Version: 11.2.0.3.4

OPatch succeeded.
[root@his101 OPatch-bak]# cd ..
[root@his101 grid]# cp -R /backup/psu/OPatch/ .
[root@his101 grid]# chown -R grid:oinstall OPatch
[root@his101 grid]# cd OPatch
[root@his101 OPatch]# ./opatch version
OPatch Version: 11.2.0.3.21

OPatch succeeded.

As the oracle user:

[root@his101 OPatch]# su - oracle 
[oracle@his101:/home/oracle]$cd $ORACLE_HOME
[oracle@his101:/oracle/app/oracle/product/11.2.0/db_1]$exit
[root@his101 OPatch]# cd /oracle/app/oracle/product/11.2.0/db_1/
[root@his101 db_1]# cd OPatch
[root@his101 OPatch]# ./opatch version
OPatch Version: 11.2.0.3.4

OPatch succeeded.
[root@his101 OPatch]# cd ..
[root@his101 db_1]# cp -R /backup/psu/OPatch/ .
[root@his101 db_1]# chown -R oracle:oinstall OPatch
[root@his101 db_1]# cd OPatch
[root@his101 OPatch]# ./opatch version
OPatch Version: 11.2.0.3.21

OPatch succeeded.
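The version checks above are the point of this step: each PSU readme states a minimum OPatch version, and comparing the dotted strings lexically would be wrong ("11.2.0.3.4" sorts after "11.2.0.3.21" as text). A small numeric-comparison sketch (the minimum 11.2.0.3.6 used here is illustrative; take the real value from the PSU readme):

```python
# Sketch: compare dotted OPatch version strings numerically, field by field.
def version_tuple(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def opatch_is_new_enough(installed: str, required: str) -> bool:
    return version_tuple(installed) >= version_tuple(required)

assert not opatch_is_new_enough("11.2.0.3.4", "11.2.0.3.6")   # stock OPatch is too old
assert opatch_is_new_enough("11.2.0.3.21", "11.2.0.3.6")      # upgraded OPatch is fine
print("OPatch version check OK")
```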

Second node:
As the grid user:

[root@his102 ~]# su - grid
[grid@his102:/home/grid]$cd $ORACLE_HOME
[grid@his102:/oracle/app/11.2.0/grid]$exit
[root@his102 ~]# cd /oracle/app/11.2.0/grid/
[root@his102 grid]# cd OPatch/
[root@his102 OPatch]# ./opath version
-bash: ./opath: No such file or directory
[root@his102 OPatch]# ./opatch version
OPatch Version: 11.2.0.3.4

OPatch succeeded.
[root@his102 OPatch]# cd ..
[root@his102 grid]# mv OPatch OPatch-bak
[root@his102 grid]# cp -R /backup/psu/OPatch/ .
[root@his102 grid]# chown -R grid:oinstall OPatch
[root@his102 grid]# cd OPatch
[root@his102 OPatch]# ./opatch version
OPatch Version: 11.2.0.3.21

OPatch succeeded.

As the oracle user:

[root@his102 db_1]# su - oracle 
[oracle@his102:/home/oracle]$cd $ORACLE_HOME
[oracle@his102:/oracle/app/oracle/product/11.2.0/db_1]$exit
[root@his102 db_1]# cd /oracle/app/oracle/product/11.2.0/db_1
[root@his102 db_1]# cd OPatch/
[root@his102 OPatch]# ./opatch version
OPatch Version: 11.2.0.3.4

OPatch succeeded.
[root@his102 OPatch]# cd ..
[root@his102 db_1]# mv OPatch OPatch-bak
[root@his102 db_1]# cp -R /backup/psu/OPatch .
[root@his102 db_1]# chown -R oracle:oinstall OPatch
[root@his102 db_1]# cd OPatch
[root@his102 OPatch]# ./opatch version
OPatch Version: 11.2.0.3.21

OPatch succeeded.

Check that the OPatch upgrade succeeded:
grid:

/oracle/app/11.2.0/grid/OPatch/opatch version

oracle:

/oracle/app/oracle/product/11.2.0/db_1/OPatch/opatch version

Generate the OCM response file; this only needs to be done on the first node. Note that emocmrsp writes ocm.rsp to the current working directory.

[root@his101 bin]# su - grid

[grid@his101:/home/grid]$/oracle/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp 
OCM Installation Response Generator 10.3.7.0.0 - Production
Copyright (c) 2005, 2012, Oracle and/or its affiliates.  All rights reserved.

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name: 

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y
The OCM configuration response file (ocm.rsp) was successfully created.
[grid@his101:/home/grid]$ls /oracle/app/11.2.0/grid/OPatch/ocm/bin/ -l
total 12
-rwxr-x--- 1 grid oinstall 9063 Mar 22 08:55 emocmrsp

[root@his101 psu]# /oracle/app/11.2.0/grid/OPatch/opatch auto /backup/psu/30070097 -ocmrf /oracle/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
Executing /oracle/app/11.2.0/grid/perl/bin/perl /oracle/app/11.2.0/grid/OPatch/crs/patch11203.pl -patchdir /backup/psu -patchn 30070097 -ocmrf /oracle/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp -paramfile /oracle/app/11.2.0/grid/crs/install/crsconfig_params

This is the main log file: /oracle/app/11.2.0/grid/cfgtoollogs/opatchauto2020-03-22_09-55-19.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/oracle/app/11.2.0/grid/cfgtoollogs/opatchauto2020-03-22_09-55-19.report.log

2020-03-22 09:55:19: Starting Clusterware Patch Setup
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params

Stopping RAC /oracle/app/oracle/product/11.2.0/db_1 ...
Stopped RAC /oracle/app/oracle/product/11.2.0/db_1 successfully

patch /backup/psu/30070097/29938455/custom/server/29938455  apply successful for home  /oracle/app/oracle/product/11.2.0/db_1

patch /backup/psu/30070097/29913194  apply successful for home  /oracle/app/oracle/product/11.2.0/db_1 

Stopping CRS...
Stopped CRS successfully

patch /backup/psu/30070097/29938455  apply successful for home  /oracle/app/11.2.0/grid 
patch /backup/psu/30070097/29913194  apply successful for home  /oracle/app/11.2.0/grid 
patch /backup/psu/30070097/29509309  apply successful for home  /oracle/app/11.2.0/grid 

Starting CRS...
Installing Trace File Analyzer
CRS-4123: Oracle High Availability Services has been started.
Oracle Grid Infrastructure stack start initiated but failed to complete at /backup/psu/30070097/29938455/files/crs/install/crsconfig_lib.pm line 11821.

The run above failed while restarting the CRS stack (note also that `-ocmrf` was pointed at the emocmrsp binary rather than at the generated ocm.rsp response file). Patch each Oracle home explicitly to avoid the failure:
grid:

/oracle/app/11.2.0/grid/OPatch/opatch auto /backup/psu/30070097 -oh /oracle/app/11.2.0/grid -ocmrf /oracle/app/11.2.0/grid/OPatch/ocm/bin/ocm.rsp  

oracle:

/oracle/app/oracle/product/11.2.0/OPatch/opatch auto /backup/psu/30070097 -oh /oracle/app/oracle/product/11.2.0/db_1 -ocmrf /oracle/app/11.2.0/grid/OPatch/ocm/bin/ocm.rsp

/oracle/app/11.2.0/grid/OPatch/opatch auto /backup/psu/30070097 -ocmrf /oracle/app/11.2.0/grid/OPatch/ocm/bin/ocm.rsp

On the second node, patching with this command worked without problems:

/oracle/app/11.2.0/grid/OPatch/opatch auto /backup/psu/30070097 -ocmrf /backup/psu/ocm.rsp

[oracle@his102:/home/oracle]$$ORACLE_HOME/OPatch/opatch lspatches
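After patching, `opatch lspatches` lists the patches installed in a home. A verification sketch; the one-per-line "id;description" output format is assumed from later OPatch releases, and the sample below is hypothetical output for the 30070097 combo applied above:

```python
# Sketch: parse `opatch lspatches` output and check the expected PSU
# component patch IDs are all present in the home.
def lspatches_ids(text: str) -> set:
    ids = set()
    for line in text.splitlines():
        head, sep, _ = line.partition(";")
        if sep and head.strip().isdigit():
            ids.add(head.strip())
    return ids

# Hypothetical lspatches output after applying 30070097.
sample = """\
29938455;Database Patch Set Update : 11.2.0.4.191015 (29938455)
29913194;OCW Patch Set Update : 11.2.0.4.191015 (29913194)
"""

missing = {"29938455", "29913194"} - lspatches_ids(sample)
print("missing:", missing)  # missing: set()
```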


Appendix: Changing RAC addresses

1 Changing the public IP and VIP addresses

  1. Shut down the database instance on both nodes
  sqlplus / as sysdba
  shutdown immediate;
  2. Stop the clusterware stack on both nodes
  /oracle/app/11.2.0/grid/bin/crsctl stop has
  3. Update the /etc/hosts file
  #public ip
  192.168.1.101 his101
  192.168.1.102 his102

  #priv ip
  10.10.10.201  his101priv
  10.10.10.202  his102priv

  #vip ip
  192.168.1.103 his101vip
  192.168.1.104 his102vip

  #scan ip
  192.168.1.100 hisscan
  4. Update the NIC configuration on both nodes:
  [root@his101 ~]# cd /etc/sysconfig/network-scripts
  [root@his101 network-scripts]# vi ifcfg-bond1.1049
  DEVICE=bond1.1049
  ONBOOT=yes
  BOOTPROTO=static
  VLAN=yes
  IPADDR=192.168.1.101
  PREFIX=24
  GATEWAY=192.168.1.254
  [root@his102 ~]# cd /etc/sysconfig/network-scripts
  [root@his102 network-scripts]# vi ifcfg-bond1.1049
  DEVICE=bond1.1049
  ONBOOT=yes
  BOOTPROTO=static
  VLAN=yes
  IPADDR=192.168.1.102
  PREFIX=24
  GATEWAY=192.168.1.254
  5. Restart interface bond1.1049 (finish one node before starting the other; reach the node over the interconnect network while its public interface is down)
  ifdown bond1.1049
  ifup bond1.1049
  6. Start the clusterware stack
  /oracle/app/11.2.0/grid/bin/crsctl start has
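Before restarting the clusterware it is worth sanity-checking the edited /etc/hosts fragment. A rough sketch: it flags duplicate IPs and node hostnames over the 8-character limit noted in the planning section (exempting the vip/priv/scan aliases by suffix is a heuristic assumption, not an Oracle rule):

```python
# Sketch: validate an /etc/hosts fragment for a RAC cluster.
def check_hosts(text: str) -> list:
    problems, seen_ips = [], {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()   # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        if ip in seen_ips:
            problems.append(f"duplicate IP {ip}")
        seen_ips[ip] = names
        for name in names:
            # Heuristic: only bare node names are bound by the 8-char limit.
            if len(name) > 8 and not name.endswith(("vip", "priv", "scan")):
                problems.append(f"hostname {name} longer than 8 chars")
    return problems

hosts = """\
192.168.1.101 his101
192.168.1.102 his102
10.10.10.201  his101priv
10.10.10.202  his102priv
192.168.1.103 his101vip
192.168.1.104 his102vip
192.168.1.100 hisscan
"""

print(check_hosts(hosts))  # []
```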

2 Changing the SCAN IP

  1. Check the current SCAN configuration (as grid)
  $GRID_HOME/bin/srvctl config scan
  2. Stop the SCAN listener and the SCAN VIP
  $GRID_HOME/bin/srvctl stop scan_listener
  $GRID_HOME/bin/srvctl stop scan
  $GRID_HOME/bin/srvctl status scan
  3. Modify the SCAN (as root, after updating /etc/hosts with the new IP)
  /oracle/app/11.2.0/grid/bin/srvctl modify scan -n hisscan
  4. Confirm the SCAN IP change took effect
  $GRID_HOME/bin/srvctl config scan
  5. Update the SCAN listener
  $GRID_HOME/bin/srvctl modify scan_listener -u
  6. Restart the SCAN and the SCAN listener
  $GRID_HOME/bin/srvctl start scan
  $GRID_HOME/bin/srvctl start scan_listener
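The final confirmation can be scripted against `srvctl config scan`. A sketch; the two-line output format below is assumed from typical 11.2 installations, so adjust the parsing if your release prints it differently:

```python
# Sketch: extract SCAN VIP addresses from `srvctl config scan` output
# and verify the new IP is in place.
import re

def scan_ips(text: str) -> list:
    """Collect the VIP addresses listed in the config output."""
    return re.findall(r"IP: /[^/]+/(\d+\.\d+\.\d+\.\d+)", text)

# Assumed sample output after the change.
sample = """\
SCAN name: hisscan, Network: 1/192.168.1.0/255.255.255.0/bond1.1049
SCAN VIP name: scan1, IP: /hisscan/192.168.1.100
"""

assert scan_ips(sample) == ["192.168.1.100"]
print("SCAN IP verified")
```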