Oracle Database 19c RAC Cluster Deployment Guide
1. Operating System
Both nodes (rac1, rac2) run Oracle Linux 7.8.
2. Resource Planning
2.1 Host resources
Hostname | Alias | Address type | IP | File
rac1 | rac1 | public | 10.138.2.10 | /etc/hosts
rac1 | rac1-vip | virtual | 10.138.2.11 | /etc/hosts
rac1 | rac1-priv | private | 10.138.3.10 | /etc/hosts
rac1 | rac1-openfiler | storage | 10.138.4.10 | /etc/hosts
rac2 | rac2 | public | 10.138.2.20 | /etc/hosts
rac2 | rac2-vip | virtual | 10.138.2.21 | /etc/hosts
rac2 | rac2-priv | private | 10.138.3.20 | /etc/hosts
rac2 | rac2-openfiler | storage | 10.138.4.20 | /etc/hosts
openfiler | openfiler | storage | 10.138.4.100 | (storage server)
rac1/rac2 | rac-scan | SCAN | 10.138.2.100 | /etc/hosts
2.2 Shared disks and ASM layout
ASM disk group | Purpose | Size | Redundancy
CRS | OCR, voting files, GIMR (19c management DB) | 20G+20G+20G | normal
DATA | data files | 50G+50G | normal
RECO | fast recovery area | 50G | external
3. Environment Preparation
3.1 Host setup
(1) Set the hostname (rac1, rac2)
[root@localhost ~]# hostname
localhost.localdomain
[root@localhost ~]# hostnamectl set-hostname rac1
[root@localhost ~]# hostname
(2) Disable the firewall (rac1, rac2)
[root@localhost ~]# systemctl status firewalld.service
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# systemctl disable firewalld.service
(3) Disable SELinux (rac1, rac2)
[root@localhost selinux]# vim /etc/selinux/config
Set SELINUX=disabled;
[root@localhost selinux]# setenforce 0    # takes effect immediately, without a reboot
(4) Edit the hosts file (rac1, rac2)
#public ip
10.138.2.10 rac1
10.138.2.20 rac2
#private ip
10.138.3.10 rac1-priv
10.138.3.20 rac2-priv
#virtual ip
10.138.2.11 rac1-vip
10.138.2.21 rac2-vip
#scan ip
10.138.2.100 rac-scan
#openfiler
10.138.4.100 openfiler
(5) Kernel parameters (rac1, rac2); run sysctl -p afterwards to apply them
# vim /etc/sysctl.conf
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
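After `sysctl -p`, it is worth confirming each value actually took effect. A minimal sketch (the `check_sysctl` helper is hypothetical, not part of any Oracle tooling):

```shell
# Hypothetical helper: compare an expected kernel parameter with the
# live value reported by `sysctl -n`.
check_sysctl() {
  # $1 = parameter name, $2 = expected value
  actual=$(sysctl -n "$1" 2>/dev/null)
  if [ "$actual" = "$2" ]; then
    echo "OK $1 = $2"
  else
    echo "MISMATCH $1: expected $2, got ${actual:-<unset>}"
  fi
}

# Usage after `sysctl -p`, e.g.:
# check_sysctl fs.aio-max-nr 1048576
# check_sysctl fs.file-max 6815744
```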
(6) Resource limits (rac1, rac2)
# vim /etc/security/limits.conf
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 134217728
oracle soft memlock 134217728
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid soft memlock 134217728
grid hard memlock 134217728
(7) Install required packages (rac1, rac2)
yum install -y binutils compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 libaio libaio.i686 libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel zlib-devel zlib-devel.i686 compat-libcap1
If no network repository is available, mount the installation ISO and point a local repo file at it, e.g.:
[base]
name=BaseOS
baseurl=file:///mnt/cdrom/BaseOS
enabled=1
gpgcheck=0
[AppStream]
name=AppStream
baseurl=file:///mnt/cdrom/AppStream
enabled=1
gpgcheck=0
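To confirm nothing from the list was silently skipped, the installed state can be checked with rpm. `missing_pkgs` is a hypothetical helper, and the list is abbreviated to the x86_64 names (add the .i686 variants as needed):

```shell
# Hypothetical helper: print any required package that rpm does not
# report as installed (names taken from the yum install list above).
PKGS="binutils gcc gcc-c++ glibc glibc-devel ksh libgcc libstdc++ \
libstdc++-devel libaio libaio-devel libXext libXtst libX11 libXau \
libxcb libXi make sysstat unixODBC unixODBC-devel zlib-devel"

missing_pkgs() {
  for p in $PKGS; do
    rpm -q "$p" >/dev/null 2>&1 || echo "$p"
  done
}

# missing_pkgs    # empty output means everything is installed
```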
(8) Disable OS time synchronization (rac1, rac2)
Use the ctssd time-sync daemon that ships with RAC, and disable the OS-level time synchronization:
[root@localhost yum.repos.d]# systemctl stop chronyd
[root@localhost yum.repos.d]# systemctl disable chronyd
[root@localhost yum.repos.d]# mv /etc/chrony.conf /etc/chrony.conf.bak
(9) Disable transparent huge pages (rac1, rac2)
/sys/kernel/mm/transparent_hugepage/enabled
[root@localhost soft]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
The bracketed value is the active one: always means enabled, never means disabled.
[root@localhost soft]# vim /etc/rc.d/rc.local
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
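To script the post-reboot check, the active value (the bracketed token) can be parsed out. `thp_state` is a hypothetical helper:

```shell
# Hypothetical helper: print the active transparent-hugepage mode, i.e.
# the bracketed token in the kernel's "enabled" file.
thp_state() {
  # $1 = path to the enabled file (a parameter so it can be tested on a copy)
  tr ' ' '\n' < "$1" | sed -n 's/^\[\(.*\)\]$/\1/p'
}

# On a correctly configured node this prints "never":
# thp_state /sys/kernel/mm/transparent_hugepage/enabled
```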
[root@localhost soft]# chmod +x /etc/rc.d/rc.local
(10) Disable ntp (rac1, rac2)
[root@localhost yum.repos.d]# systemctl stop ntpd.service
[root@localhost yum.repos.d]# systemctl disable ntpd.service
[root@localhost yum.repos.d]# mv /etc/ntp.conf /etc/ntp.conf.orig
(11) Disable avahi (rac1, rac2)
[root@localhost yum.repos.d]# systemctl disable avahi-daemon.service
[root@localhost yum.repos.d]# systemctl disable avahi-daemon.socket
[root@rac2 ~]# systemctl stop avahi-daemon.socket
[root@rac2 ~]# systemctl stop avahi-daemon.service
(12) Set NOZEROCONF (rac1, rac2)
[root@rac1 ~]# vim /etc/sysconfig/network
NOZEROCONF=yes
3.2 Hardware checks
(1) Physical memory (rac1, rac2)
[root@rac1 ~]# grep MemTotal /proc/meminfo
MemTotal: 8466512 kB
[root@rac1 ~]# free -h
Physical memory should be at least 8 GB;
(2) Swap (rac1, rac2)
[root@rac1 ~]# grep SwapTotal /proc/meminfo
SwapTotal: 9215996 kB
[root@rac1 ~]# free -h
At least 8 GB of swap is recommended;
If swap is insufficient, extend it as follows:
[root@rac1 ~]# dd if=/dev/zero of=/soft/swap count=1024 bs=1M
This creates a 1 GB empty file;
[root@rac1 soft]# mkswap /soft/swap
[root@rac1 soft]# swapon /soft/swap     # enable the swap file
[root@rac1 soft]# swapoff /soft/swap    # disable it again if needed
Add a line to /etc/fstab so it is mounted at boot:
/soft/swap swap swap defaults 0 0
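Oracle's sizing guideline for 19c relates swap to RAM: 1.5x RAM when RAM is 1-2 GB, the same as RAM when 2-16 GB, and 16 GB above that. A sketch of that rule (`recommended_swap_gb` is a hypothetical helper):

```shell
# Hypothetical helper implementing Oracle's documented swap guideline:
#   RAM 1-2 GB  -> 1.5x RAM
#   RAM 2-16 GB -> same as RAM
#   RAM > 16 GB -> 16 GB
recommended_swap_gb() {
  # $1 = physical RAM in GB
  ram=$1
  if [ "$ram" -le 2 ]; then
    echo $(( ram * 3 / 2 ))
  elif [ "$ram" -le 16 ]; then
    echo "$ram"
  else
    echo 16
  fi
}

# recommended_swap_gb 8    # -> 8, matching the ~8 GB RAM / ~9 GB swap above
```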
(3) File systems (rac1, rac2)
[root@rac1 soft]# df -h
3.3 User environment
(1) Create users and groups (rac1, rac2)
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
groupadd -g 54330 racdba
[root@rac1 soft]# useradd -u 54321 -g oinstall -G dba,oper,backupdba,dgdba,kmdba,asmdba,racdba oracle
[root@rac1 soft]# useradd -u 54322 -g oinstall -G dba,asmdba,asmoper,asmadmin,racdba grid
[root@rac1 soft]# passwd oracle
[root@rac1 soft]# passwd grid
Group | Role | Privileges
oinstall | | install and upgrade Oracle software
dba | SYSDBA | create, drop, alter, start and stop the database; switch archivelog mode; back up and recover the database
oper | SYSOPER | start, stop, alter, back up and recover the database; change archive mode
asmdba | SYSDBA for ASM | administer ASM instances
asmoper | SYSOPER for ASM | start and stop ASM instances
asmadmin | SYSASM | mount and dismount disk groups; administer storage devices
backupdba | SYSBACKUP | start/stop the database and run backup and recovery (12c+)
dgdba | SYSDG | administer Data Guard (12c+)
kmdba | SYSKM | key-management (encryption) operations
racdba | | RAC administration
(2) Create installation directories (rac1, rac2)
mkdir -p /u01/app/grid
mkdir -p /u01/app/19.3.0.0/grid
mkdir -p /u01/app/oracle/product/19.3.0.0/db_1
chown -R oracle:oinstall /u01/
chown -R grid:oinstall /u01/app/19.3.0.0/grid
chown -R grid:oinstall /u01/app/grid
chmod -R 775 /u01/
(3) Edit user profiles (rac1, rac2)
grid user:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac1    # use rac2 on node 2
export ORACLE_BASE=/u01/app/grid
export GRID_HOME=/u01/app/19.3.0.0/grid
export DB_HOME=$ORACLE_BASE/product/19.3.0.0/db_1
export ORACLE_HOME=$GRID_HOME
export ORACLE_SID=+ASM1    # use +ASM2 on node 2
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
oracle user:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac2    # use rac1 on node 1
export ORACLE_UNQNAME=rac
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/19.3.0.0/grid
export DB_HOME=$ORACLE_BASE/product/19.3.0.0/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=rac2    # use rac1 on node 1
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
(4) Configure SSH user equivalence (rac1, rac2)
As both the oracle and grid users, run on both nodes:
cd
rm -rf ~/.ssh
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
Then, on the first node, as both the oracle and grid users:
cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac1:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac1-priv:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2-priv:~/.ssh/authorized_keys
Then, on the second node, as both the oracle and grid users:
cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac1:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac1-priv:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2-priv:~/.ssh/authorized_keys
Verify (none of these should prompt for a password):
ssh rac1 date;ssh rac2 date;ssh rac1-priv date;ssh rac2-priv date
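The verification above can be made non-interactive so a broken trust fails fast instead of hanging at a password prompt. `check_ssh_trust` is a hypothetical wrapper:

```shell
# Hypothetical helper: try every interface with BatchMode so a broken
# trust prints FAIL instead of prompting for a password.
check_ssh_trust() {
  for h in rac1 rac2 rac1-priv rac2-priv; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" date >/dev/null 2>&1; then
      echo "OK   $h"
    else
      echo "FAIL $h"
    fi
  done
}

# Run as both the grid and oracle users on both nodes; all four lines
# should report OK.
```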
3.4 Shared storage
(1) iSCSI service (rac1, rac2)
[root@rac2 ~]# rpm -qa|grep iscsi
libiscsi-1.9.0-7.el7.x86_64
libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64
iscsi-initiator-utils-6.2.0.874-17.0.1.el7.x86_64
iscsi-initiator-utils-iscsiuio-6.2.0.874-17.0.1.el7.x86_64
[root@rac2 ~]# systemctl status iscsid
[root@rac2 ~]# systemctl enable iscsid
[root@rac2 ~]# systemctl start iscsid
(2) iSCSI connections (rac1, rac2)
[root@rac1 ~]# iscsiadm -m discovery -t sendtargets -p 10.138.4.100    # discover targets
[root@rac1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.rac -p 10.138.4.100 -l    # log in
[root@rac1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.asm -p 10.138.4.100:3260 -u    # log out
(3) Multipath (rac1, rac2)
[root@rac2 ~]# yum -y install device-mapper device-mapper-multipath
[root@rac1 etc]# systemctl status multipathd.service
[root@rac1 etc]# systemctl start multipathd.service
[root@rac1 etc]# systemctl enable multipathd.service
[root@rac2 ~]# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc
[root@rac2 rules.d]# ls -l /dev/disk/by-id/
lrwxrwxrwx 1 root root 9 Oct 1 10:58 scsi-14f504e46494c45524869464c58672d657546642d504d496d -> ../../sdg
lrwxrwxrwx 1 root root 9 Oct 1 10:58 scsi-14f504e46494c455248724b6366712d633149612d305a5033 -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 1 10:58 scsi-14f504e46494c4552576d6e615a312d504234692d554d4b42 -> ../../sdf
lrwxrwxrwx 1 root root 9 Oct 1 10:58 scsi-14f504e46494c45525a474a6464592d565239662d41546152 -> ../../sde
lrwxrwxrwx 1 root root 9 Oct 1 10:58 scsi-14f504e46494c45526e4a506368472d764953752d45515864 -> ../../sdb
lrwxrwxrwx 1 root root 9 Oct 1 10:58 scsi-14f504e46494c45527a46625075512d6c7166382d6f4e7852 -> ../../sdd
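Collecting the WWIDs for multipath.conf amounts to stripping the leading scsi- prefix from those by-id names. `wwid_of` is a hypothetical helper:

```shell
# Hypothetical helper: strip the leading "scsi-" from a by-id name to
# get the WWID that multipath.conf expects.
wwid_of() {
  echo "${1#scsi-}"
}

# Example with one of the names listed above:
# wwid_of scsi-14f504e46494c45526e4a506368472d764953752d45515864
# -> 14f504e46494c45526e4a506368472d764953752d45515864

# To list all of them:
# for d in /dev/disk/by-id/scsi-*; do wwid_of "$(basename "$d")"; done
```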
The device's WWID is the by-id name with the leading scsi- prefix removed.
Replace the entire contents of /etc/multipath.conf with:
multipaths {
multipath {
wwid 14f504e46494c45526e4a506368472d764953752d45515864
alias crs1
uid 54322
gid 54321
}
multipath {
wwid 14f504e46494c455248724b6366712d633149612d305a5033
alias crs2
uid 54322
gid 54321
}
multipath {
wwid 14f504e46494c45527a46625075512d6c7166382d6f4e7852
alias crs3
uid 54322
gid 54321
}
multipath {
wwid 14f504e46494c45525a474a6464592d565239662d41546152
alias data1
uid 54322
gid 54321
}
multipath {
wwid 14f504e46494c4552576d6e615a312d504234692d554d4b42
alias data2
uid 54322
gid 54321
}
multipath {
wwid 14f504e46494c45524869464c58672d657546642d504d496d
alias reco
uid 54322
gid 54321
}
}
[root@rac2 rules.d]# ls -l /dev/disk/by-path/    # shows the device mappings
Restart multipathd.service;
fdisk -l should now list devices such as Disk /dev/mapper/data2;
[root@rac1 ~]# ls -l /dev/mapper/
Check the ownership of the disks; at this point they still belong to root.
Edit /etc/udev/rules.d/99-oracle-asmdevices.rules and add:
[root@rac1 ~]# vim /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-14f504e46494c45524869464c58672d657546642d504d496d",ACTION=="add|change",OWNER:="grid",GROUP:="asmadmin",MODE:="660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-14f504e46494c4552576d6e615a312d504234692d554d4b42",ACTION=="add|change",OWNER:="grid",GROUP:="asmadmin",MODE:="660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-14f504e46494c45525a474a6464592d565239662d41546152",ACTION=="add|change",OWNER:="grid",GROUP:="asmadmin",MODE:="660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-14f504e46494c45527a46625075512d6c7166382d6f4e7852",ACTION=="add|change",OWNER:="grid",GROUP:="asmadmin",MODE:="660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-14f504e46494c455248724b6366712d633149612d305a5033",ACTION=="add|change",OWNER:="grid",GROUP:="asmadmin",MODE:="660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-14f504e46494c45526e4a506368472d764953752d45515864",ACTION=="add|change",OWNER:="grid",GROUP:="asmadmin",MODE:="660"
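The six rules above differ only in the WWID, so they can be generated rather than hand-edited. `asm_udev_rule` is a hypothetical helper:

```shell
# Hypothetical helper: emit one udev rule for a given multipath WWID,
# in the same format as the rules above.
asm_udev_rule() {
  # $1 = WWID without the "mpath-" prefix
  printf 'KERNEL=="dm-*",ENV{DM_UUID}=="mpath-%s",ACTION=="add|change",OWNER:="grid",GROUP:="asmadmin",MODE:="660"\n' "$1"
}

# Example:
# for w in 14f504e46494c45526e4a506368472d764953752d45515864 \
#          14f504e46494c455248724b6366712d633149612d305a5033; do
#   asm_udev_rule "$w"
# done >> /etc/udev/rules.d/99-oracle-asmdevices.rules
```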
After a reboot, verify the ownership:
[root@rac2 ~]# ls -l /dev | grep dm-
[root@rac2 ~]# multipath -ll    # show the multipath topology
[root@rac2 ~]# multipath -F     # flush all multipath maps
[root@rac2 ~]# multipath -v2    # re-scan and rebuild the maps
4. Grid Infrastructure Installation
[root@rac1 ~]# su - grid
4.1 Unzip the software (rac1)
[grid@rac1 grid]$ unzip /soft/V982068-01.zip -d $ORACLE_HOME
4.2 Pre-installation checks (rac1)
[grid@rac1 grid]$ $ORACLE_HOME/oui/prov/resources/scripts/sshUserSetup.sh -user grid -hosts "rac1 rac2" -advanced -noPromptPassphrase
When configuring user equivalence during the 19c RAC installation, Setup reports success, but clicking Test fails with INS-44000; clicking Yes to continue raises INS-32070, and continuing further raises INS-06006.
This happens when OpenSSH has been upgraded to 8.x. Work around it by wrapping scp:
[root@rac1 ~]# ssh -V
[root@rac1 ~]# mv /usr/bin/scp /usr/bin/scp.orig
[root@rac1 ~]# vi /usr/bin/scp    # create a new scp containing this single line:
/usr/bin/scp.orig -T $*
[root@rac1 ~]# chmod 555 /usr/bin/scp
After the installation completes, restore the original:
[root@rac1 ~]# mv /usr/bin/scp.orig /usr/bin/scp
[grid@rac1 grid]$ cd $ORACLE_HOME
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
4.3 Install the cvuqdisk rpm (rac1, rac2)
Install cvuqdisk-1.0.10-1.rpm:
[root@rac1 rpm]# rpm -ivh /u01/app/19.3.0.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm
Set the local display for the GUI installer:
[grid@rac1 grid]$ export DISPLAY=10.138.2.1:0.0
[grid@rac1 grid]$ xhost +
4.4 Install Grid Infrastructure (rac1)
4.5 Known error
INS-20802 Oracle Cluster Verification Utility failed
Oracle recommends registering the SCAN in DNS; since no DNS is used here and the SCAN is defined in the hosts file instead, this error can safely be ignored and does not affect the installation or subsequent use of RAC.
4.6 Verify the cluster services
[grid@rac1 ~]$ crsctl status resource -t
5. Database Software Installation
5.1 Create the disk groups
[grid@rac1 ~]$ asmca
5.2 Install the database software
Install the Oracle Database software only; create the database afterwards with dbca.