Installation Guide: Oracle 19c RAC + Oracle Linux 7.9 on VMware Virtual Machines

Contents

Chapter 1  Overall Planning
1.1 Topology
1.2 Host Planning
1.3 IP Planning
1.4 Storage Planning
1.5 Database Planning
Overall Database Installation Plan

Chapter 2  OS Installation and Configuration
2.1 Creating the Virtual Machines
2.2 OS Installation
2.2.1 Server Configuration Table
2.2.2 Installation Notes
2.3 OS Configuration
2.3.1 IP Address Configuration
2.3.2 hosts File Configuration
2.3.3 Creating and Attaching Shared Disks
2.3.4 Disabling the Firewall and SELinux
2.3.5 Adjusting Network Parameters
2.3.6 Resizing /dev/shm
2.3.7 Disabling THP and NUMA
2.3.8 Package Installation
2.3.9 Disabling NTP and chrony
2.3.10 Kernel Parameter Changes
2.3.11 Disabling Unnecessary Services
2.3.12 Creating Users and Directories
2.3.13 Environment Variables
2.3.14 Other Parameter Changes
2.3.15 User Resource Limits
2.3.16 Time Synchronization
2.3.17 Shutting Down Both Hosts and Enabling the Shared Folder
2.3.18 Configuring Shared Disks with udev
2.3.19 Installing the GI Software

Chapter 3  Grid Installation
3.1 Pre-installation Checks
3.2 Grid Installation
3.2.1 Running the Installer
3.2.2 Cluster Verification

Chapter 4  ASM Disk Group Management
4.1 Creating Disk Groups with ASMCA

Chapter 5  Oracle Software Installation
5.1 Unpacking the Software
5.2 GUI Installation

Chapter 6  Database Creation
6.1 Creating the Database with DBCA

========================================================================

Chapter 1  Overall Planning

1.1 Topology

1.2 Host Planning

Hostname   OS       DB        Role    CPU  RAM  NICs
dkf19c01   OEL 7.9  19.3.0.0  node 1  2*2  4G   2
dkf19c02   OEL 7.9  19.3.0.0  node 2  2*2  4G   2

1.3 IP Planning

Node      IPADDR           NAME
dkf19c01  10.0.0.111       dkf19c01
dkf19c01  10.0.0.115       dkf19c01-vip
dkf19c01  192.168.195.111  dkf19c01-priv
dkf19c02  10.0.0.112       dkf19c02
dkf19c02  10.0.0.116       dkf19c02-vip
dkf19c02  192.168.195.112  dkf19c02-priv
SCAN      10.0.0.117       dkf19c-scan
SCAN      10.0.0.118       dkf19c-scan
SCAN      10.0.0.119       dkf19c-scan

1.4 Storage Planning

OS partitions:

Partition  Size
/boot      1G
/          10G
/tmp       10G
SWAP       8G
/u01       50G

Shared disks:

Purpose  Size  Count
DATA     10G   2
OCR      1G    3

1.5 Database Planning

Software:

RAC     Type      Software          Notes
dkf19c  OS        Oracle Linux 7.9
        Database  Oracle 19.3.0.0

Database server storage strategy:

Item                   Capacity    Location  Storage     Requirement
Clusterware software   local       /u01      filesystem
OCR/voting disks       3 x 1GB     +OCR      ASM         normal redundancy
Database software      local       /u01      filesystem
Database files (DATA)  20GB total  +DATA     ASM         external redundancy

New groups:

Group      GID    Notes
oinstall   54321  Oracle inventory and software owner
dba        54322  Database administrators
oper       54323  DBA operators
backupdba  54324  Backup administrators
dgdba      54325  Data Guard administrators
kmdba      54326  Key management administrators
asmdba     54327  ASM database administrators
asmoper    54328  ASM operators
asmadmin   54329  Oracle Automatic Storage Management administrators
racdba     54330  RAC administrators

New users:

User    UID    Secondary groups (primary group: oinstall)    Home          Shell
oracle  10000  dba,oper,backupdba,dgdba,kmdba,asmdba,racdba  /home/oracle  bash
grid    10001  dba,asmdba,asmoper,asmadmin,racdba            /home/grid    bash

Overall Database Installation Plan

Item           Value
CDB            Yes
PDB            pdkf01
Memory         SGA_TARGET
processes      300
Character set  AL32UTF8
Archive mode   noarchivelog (switch to archivelog manually later)

Chapter 2  OS Installation and Configuration

2.1 Creating the Virtual Machines

Both virtual machines are created the same way, differing only in IP address and hostname, so the screenshots below cover only node 1.

New Virtual Machine wizard:

After the VM is created, add a second network adapter.

Note: since both virtual machines run the same OS, the second one can be created by cloning the first after the OS is installed.

2.2 OS Installation

2.2.1 Server Configuration Table

Host      Disk  RAM  IP Address  User  Password
dkf19c01  80G   4GB  10.0.0.111  root  (your choice)
dkf19c02  80G   4GB  10.0.0.112  root  (your choice)

2.2.2 Installation Notes

1. Use the English interface.
2. Set the Asia/Shanghai time zone and disable daylight saving time.
3. Partition the local storage as shown below:
4. Software selection:
5. Enable the network adapters:
6. Disable KDUMP.
7. Click Begin Installation.
8. Set the root user password.
9. When installation finishes, run reboot.
10. After the reboot, shut the host down cleanly and create the second virtual machine with VMware's Clone function.

At this point both virtual hosts are ready and the OS is installed; next comes OS configuration.

2.3 OS Configuration

2.3.1 IP Address Configuration

First network adapter:

[root@dkf19c01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=static

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=ens33

UUID=9a65b8b6-244d-40fc-9c92-6da63f31a117

DEVICE=ens33

ONBOOT=yes

IPV6_PRIVACY=no

IPADDR=10.0.0.111

NETMASK=255.255.255.0

GATEWAY=10.0.0.1

DNS1=10.0.0.1

Second network adapter:

[root@dkf19c01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=static

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=ens34

UUID=5a04719d-b6e8-48ec-862b-6d534ad45537

DEVICE=ens34

ONBOOT=yes

IPADDR=192.168.195.111

NETMASK=255.255.255.0

GATEWAY=192.168.195.1

DNS1=192.168.195.1

[root@dkf19c01 ~]#

Adjust the second host's network adapters the same way, with these addresses:

[root@dkf19c02 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=static

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=ens33

UUID=9a65b8b6-244d-40fc-9c92-6da63f31a117

DEVICE=ens33

ONBOOT=yes

IPV6_PRIVACY=no

IPADDR=10.0.0.112

NETMASK=255.255.255.0

GATEWAY=10.0.0.1

DNS1=10.0.0.1

Second network adapter:

[root@dkf19c02 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=static

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=ens34

UUID=5a04719d-b6e8-48ec-862b-6d534ad45537

DEVICE=ens34

ONBOOT=yes

IPADDR=192.168.195.112

NETMASK=255.255.255.0

GATEWAY=192.168.195.1

DNS1=192.168.195.1

[root@dkf19c02 ~]#

2.3.2 hosts File Configuration

vi /etc/hosts

Append the following (identical on both nodes):

#public ip

10.0.0.111  dkf19c01

10.0.0.112  dkf19c02

#private ip

192.168.195.111 dkf19c01-priv

192.168.195.112 dkf19c02-priv

#vip

10.0.0.115 dkf19c01-vip

10.0.0.116 dkf19c02-vip

#scanip

10.0.0.117 dkf19c-scan

10.0.0.118 dkf19c-scan

10.0.0.119 dkf19c-scan

After this is configured, shut down both hosts.
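Appending the block by hand risks duplicate lines if the setup is ever re-run; a minimal idempotent sketch is shown below. It writes to a scratch file so it can be tried safely; on the real nodes, point HOSTS at /etc/hosts.

```shell
#!/bin/sh
# Sketch: add each RAC name entry only if the exact line is not already
# present. HOSTS points at a scratch file for safe testing.
HOSTS=/tmp/hosts.demo
: > "$HOSTS"

add_entry() {   # add_entry <ip> <name>  (skip if the exact line exists)
    if ! grep -qxF "$1 $2" "$HOSTS"; then
        printf '%s %s\n' "$1" "$2" >> "$HOSTS"
    fi
}

add_entry 10.0.0.111 dkf19c01            # public
add_entry 10.0.0.112 dkf19c02
add_entry 192.168.195.111 dkf19c01-priv  # private
add_entry 192.168.195.112 dkf19c02-priv
add_entry 10.0.0.115 dkf19c01-vip        # vip
add_entry 10.0.0.116 dkf19c02-vip
add_entry 10.0.0.117 dkf19c-scan         # scan
add_entry 10.0.0.118 dkf19c-scan
add_entry 10.0.0.119 dkf19c-scan
add_entry 10.0.0.111 dkf19c01            # re-run: no duplicate is added

wc -l < "$HOSTS"    # prints 9
```

Matching the whole "ip name" line (rather than just the name) is what allows the three SCAN entries, which share one name, to coexist.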

2.3.3 Creating and Attaching Shared Disks

On the machine running VMware Workstation, open cmd as administrator, change to the directory that will hold the shared disks (here d:\vm\sharedisk), and create them:

C:\"Program Files (x86)"\VMware\"VMware Workstation"\vmware-vdiskmanager -c -s 1GB -a lsilogic -t 4 shared-asm01.vmdk

C:\"Program Files (x86)"\VMware\"VMware Workstation"\vmware-vdiskmanager -c -s 1GB -a lsilogic -t 4 shared-asm02.vmdk

C:\"Program Files (x86)"\VMware\"VMware Workstation"\vmware-vdiskmanager -c -s 1GB -a lsilogic -t 4 shared-asm03.vmdk

C:\"Program Files (x86)"\VMware\"VMware Workstation"\vmware-vdiskmanager -c -s 10GB -a lsilogic -t 4 shared-asm04.vmdk

C:\"Program Files (x86)"\VMware\"VMware Workstation"\vmware-vdiskmanager -c -s 10GB -a lsilogic -t 4 shared-asm05.vmdk
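The five create commands differ only in size and file name, so a small loop can print them for copy-paste; the vmware-vdiskmanager path and options are the ones used above.

```shell
#!/bin/sh
# Sketch: print the vmware-vdiskmanager commands for the five shared disks:
# disks 1-3 are 1GB (OCR/voting), disks 4-5 are 10GB (DATA).
VDM='C:\"Program Files (x86)"\VMware\"VMware Workstation"\vmware-vdiskmanager'
for n in 1 2 3 4 5; do
    case $n in
        1|2|3) size=1GB ;;      # OCR/voting disks
        *)     size=10GB ;;     # DATA disks
    esac
    printf '%s -c -s %s -a lsilogic -t 4 shared-asm0%s.vmdk\n' "$VDM" "$size" "$n"
done | tee /tmp/vdiskmanager_cmds.txt
```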

Once created, attach the disks to both virtual machines (same steps on each host).

Attach the other four shared disks the same way.

Then go to each virtual machine's directory (D:\vm\dkf19c01 and D:\vm\dkf19c02) and edit the .vmx file (e.g. dkf19c01.vmx), adding the following. Note: the per-disk lines were added automatically when the disks were attached.

disk.locking = "FALSE"

disk.EnableUUID = "TRUE"

scsi1.present = "TRUE"

scsi1.virtualDev = "lsilogic"

scsi1.sharedBus = "virtual"

scsi1:1.present = "TRUE"

scsi1:1.fileName = "D:\vm\sharedisk\shared-asm01.vmdk"

scsi1:1.deviceType = "disk"

scsi1:1.mode = "independent-persistent"

scsi1:2.present = "TRUE"

scsi1:2.fileName = "D:\vm\sharedisk\shared-asm02.vmdk"

scsi1:2.deviceType = "disk"

scsi1:2.mode = "independent-persistent"

scsi1:3.present = "TRUE"

scsi1:3.fileName = "D:\vm\sharedisk\shared-asm03.vmdk"

scsi1:3.deviceType = "disk"

scsi1:3.mode = "independent-persistent"

scsi1:4.present = "TRUE"

scsi1:4.fileName = "D:\vm\sharedisk\shared-asm04.vmdk"

scsi1:4.deviceType = "disk"

scsi1:4.mode = "independent-persistent"

scsi1:5.present = "TRUE"

scsi1:5.fileName = "D:\vm\sharedisk\shared-asm05.vmdk"

scsi1:5.deviceType = "disk"

scsi1:5.mode = "independent-persistent"

(Every shared-disk entry must reference the scsi1 bus, and the fileName paths must point at the shared-disk directory created above.)

After editing, power on both virtual machines.

2.3.4 Disabling the Firewall and SELinux (on both hosts)

Run:

systemctl stop firewalld

systemctl disable firewalld

systemctl status firewalld

[root@dkf19c01 ~]# systemctl status firewalld -l

● firewalld.service - firewalld - dynamic firewall daemon

   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)

   Active: inactive (dead)

     Docs: man:firewalld(1)

[root@dkf19c01 ~]# systemctl disable firewalld

Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.

Disable SELinux:

vi /etc/selinux/config

SELINUX=disabled

Then run:

[root@dkf19c01 ~]# setenforce 0
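To confirm the file edit took effect on each node, the value can be parsed back out of the config file. The sketch below runs against a sample copy so it can be tested anywhere; on the nodes, point CFG at /etc/selinux/config.

```shell
#!/bin/sh
# Sketch: read the SELINUX= setting back out of a selinux config file.
# CFG is a sample copy so the check can be tested anywhere.
CFG=/tmp/selinux-config.sample
cat > "$CFG" <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=disabled
SELINUXTYPE=targeted
EOF

# Take the last SELINUX= value (later entries win).
mode=$(awk -F= '/^SELINUX=/ {v=$2} END {print v}' "$CFG")
echo "SELINUX mode: $mode"    # prints: SELINUX mode: disabled

if [ "$mode" = disabled ]; then
    echo "OK: SELinux will be disabled on next boot"
fi
```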

2.3.5 Adjusting Network Parameters

Zero Configuration Networking (zeroconf) can interfere with inter-node communication in an Oracle cluster, so it should be disabled. Without zeroconf, an administrator must set up network services such as DHCP and DNS, or configure each computer's network settings manually; since this guide configures the network statically, zeroconf can safely be turned off:

[root@dkf19c01 ~]# echo "NOZEROCONF=yes"  >>/etc/sysconfig/network && cat /etc/sysconfig/network

# Created by anaconda

NOZEROCONF=yes

[root@dkf19c02 ~]# echo "NOZEROCONF=yes"  >>/etc/sysconfig/network && cat /etc/sysconfig/network

# Created by anaconda

NOZEROCONF=yes

2.3.6 Resizing /dev/shm

[root@dkf19c01 ~]# df -h

Filesystem           Size  Used Avail Use% Mounted on

devtmpfs             2.0G     0  2.0G   0% /dev

tmpfs                2.0G     0  2.0G   0% /dev/shm

tmpfs                2.0G  8.8M  2.0G   1% /run

tmpfs                2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/mapper/ol-root   10G  2.1G  8.0G  21% /

/dev/mapper/ol-home   10G   33M   10G   1% /home

/dev/mapper/ol-u01    55G   33M   55G   1% /u01

/dev/sda1           1014M  169M  846M  17% /boot

tmpfs                393M     0  393M   0% /run/user/0

[root@dkf19c01 ~]# cp /etc/fstab /etc/fstab_`date +"%Y%m%d_%H%M%S"`

echo "tmpfs    /dev/shm    tmpfs    rw,exec,size=4G    0 0">>/etc/fstab

[root@dkf19c01 ~]# mount -o remount /dev/shm

[root@dkf19c01 ~]# df -h

Filesystem           Size  Used Avail Use% Mounted on

devtmpfs             2.0G     0  2.0G   0% /dev

tmpfs                4.0G     0  4.0G   0% /dev/shm

tmpfs                2.0G  8.8M  2.0G   1% /run

tmpfs                2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/mapper/ol-root   10G  2.1G  8.0G  21% /

/dev/mapper/ol-home   10G   33M   10G   1% /home

/dev/mapper/ol-u01    55G   33M   55G   1% /u01

/dev/sda1           1014M  169M  846M  17% /boot

tmpfs                393M     0  393M   0% /run/user/0

Adjust the second node the same way.

[root@dkf19c02 ~]# df -h

Filesystem           Size  Used Avail Use% Mounted on

devtmpfs             2.0G     0  2.0G   0% /dev

tmpfs                2.0G     0  2.0G   0% /dev/shm

tmpfs                2.0G  8.8M  2.0G   1% /run

tmpfs                2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/mapper/ol-root   10G  2.1G  8.0G  21% /

/dev/mapper/ol-home   10G   33M   10G   1% /home

/dev/mapper/ol-u01    55G   33M   55G   1% /u01

/dev/sda1           1014M  169M  846M  17% /boot

tmpfs                393M     0  393M   0% /run/user/0

[root@dkf19c02 ~]# cp /etc/fstab /etc/fstab_`date +"%Y%m%d_%H%M%S"`

echo "tmpfs    /dev/shm    tmpfs    rw,exec,size=4G    0 0">>/etc/fstab

[root@dkf19c02 ~]# mount -o remount /dev/shm

[root@dkf19c02 ~]# df -h

Filesystem           Size  Used Avail Use% Mounted on

devtmpfs             2.0G     0  2.0G   0% /dev

tmpfs                4.0G     0  4.0G   0% /dev/shm

tmpfs                2.0G  8.8M  2.0G   1% /run

tmpfs                2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/mapper/ol-root   10G  2.1G  8.0G  21% /

/dev/mapper/ol-home   10G   33M   10G   1% /home

/dev/mapper/ol-u01    55G   33M   55G   1% /u01

/dev/sda1           1014M  169M  846M  17% /boot

tmpfs                393M     0  393M   0% /run/user/0
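Oracle requires /dev/shm to be at least as large as the instance memory target, so the new size is worth checking programmatically. The sketch parses a sample df -k line; on a node, use `line=$(df -k /dev/shm | tail -1)` instead, and the 3G target is an assumption for illustration.

```shell
#!/bin/sh
# Sketch: check that /dev/shm can hold a 3G memory target.
# Sample 'df -k /dev/shm' data line (4G tmpfs, sizes in KB).
line='tmpfs 4194304 0 4194304 0% /dev/shm'

required_kb=$((3 * 1024 * 1024))                 # 3G in KB
size_kb=$(echo "$line" | awk '{print $2}')

if [ "$size_kb" -ge "$required_kb" ]; then
    echo "/dev/shm OK: ${size_kb} KB >= ${required_kb} KB"
else
    echo "/dev/shm too small: ${size_kb} KB < ${required_kb} KB"
fi
```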

2.3.7 Disabling THP and NUMA

Check the THP setting on both nodes:

[root@dkf19c02 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled

Option 1 (persistent, requires a reboot):

[root@dkf19c01 ~]# sed -i 's/quiet/quiet transparent_hugepage=never numa=off/' /etc/default/grub

[root@dkf19c01 ~]# grep quiet /etc/default/grub

[root@dkf19c01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

Option 2 (takes effect immediately, but is lost on reboot and covers THP only):

[root@dkf19c01 ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled

[root@dkf19c01 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
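The enabled file lists every mode and marks the active one in brackets, e.g. `always madvise [never]`. A sketch that extracts the bracketed value, using a sample string so it runs anywhere:

```shell
#!/bin/sh
# Sketch: extract the active THP mode from the bracketed status format.
# On a node: status=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
status='always madvise [never]'

active=$(echo "$status" | sed 's/.*\[\(.*\)\].*/\1/')
echo "THP active mode: $active"    # prints: THP active mode: never

if [ "$active" = never ]; then
    echo "THP is disabled"
fi
```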

2.3.8 Package Installation

2.3.8.1 Configuring a local yum repository

[root@dkf19c01 ~]# cd /etc/yum.repos.d

# Copy oracle-linux-ol7.repo:

[root@dkf19c01 ~]# cp oracle-linux-ol7.repo OS-CDROM.repo

[root@dkf19c01 ~]# vi OS-CDROM.repo

Add the following:

[CD-ROM]

name=OS-$releaseverCDROM

baseurl=file:///tmp/cd-rom

gpgcheck=0

enabled=1

Create the mount point:

[root@dkf19c01 ~]# mkdir /tmp/cd-rom

To mount the installation DVD:

[root@dkf19c01 ~]# mount /dev/cdrom /tmp/cd-rom

Or to mount the ISO image:

[root@dkf19c01 ~]# mount -o loop /tmp/Oracle-Linux-OS...iso /tmp/cd-rom

# Test the repository:

[root@dkf19c01 ~]# yum repolist

2.3.8.2 Installing the packages Oracle requires

yum install -y binutils

yum install -y compat-libcap1

yum install -y compat-libstdc++-33

yum install -y compat-libstdc++-33.i686

yum install -y gcc

yum install -y gcc-c++

yum install -y glibc

yum install -y glibc.i686

yum install -y glibc-devel

yum install -y glibc-devel.i686

yum install -y ksh

yum install -y libgcc

yum install -y libgcc.i686

yum install -y libstdc++

yum install -y libstdc++.i686

yum install -y libstdc++-devel

yum install -y libstdc++-devel.i686

yum install -y libaio

yum install -y libaio.i686

yum install -y libaio-devel

yum install -y libaio-devel.i686

yum install -y libXext

yum install -y libXext.i686

yum install -y libXtst

yum install -y libXtst.i686

yum install -y libX11

yum install -y libX11.i686

yum install -y libXau

yum install -y libXau.i686

yum install -y libxcb

yum install -y libxcb.i686

yum install -y libXi

yum install -y libXi.i686

yum install -y make

yum install -y sysstat

yum install -y unixODBC

yum install -y unixODBC-devel

yum install -y readline

yum install -y libtermcap-devel

yum install -y bc

yum install -y unzip

yum install -y compat-libstdc++

yum install -y elfutils-libelf

yum install -y elfutils-libelf-devel

yum install -y fontconfig-devel

yum install -y libXrender

yum install -y libXrender-devel

yum install -y librdmacm-devel

yum install -y net-tools

yum install -y nfs-utils

yum install -y python

yum install -y python-configshell

yum install -y python-rtslib

yum install -y python-six

yum install -y targetcli

yum install -y smartmontools

yum install -y nscd

2.3.9 Disabling NTP and chrony

Back up (and thereby deactivate) the NTP and chrony configuration files on both hosts:

[root@dkf19c01 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak

[root@dkf19c01 ~]# mv /etc/chrony.conf /etc/chrony.conf.bak

2.3.10 Kernel Parameter Changes

Apply the same changes on both hosts:

[root@dkf19c01 ~]# cp /etc/sysctl.conf /etc/sysctl.conf.bak

memTotal=$(grep MemTotal /proc/meminfo | awk '{print $2}')

totalMemory=$((memTotal / 2048))

shmall=$((memTotal / 4))

if [ $shmall -lt 2097152 ]; then

  shmall=2097152

fi

shmmax=$((memTotal * 1024 - 1))

if [ "$shmmax" -lt 4294967295 ]; then

  shmmax=4294967295

fi

[root@dkf19c01 ~]# cat <<EOF>>/etc/sysctl.conf

fs.aio-max-nr = 6194304

fs.file-max = 6815744

kernel.shmall = $shmall

kernel.shmmax = $shmmax

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 16777216

net.core.rmem_max = 16777216

net.core.wmem_max = 16777216

net.core.wmem_default = 16777216

vm.dirty_ratio=20

vm.dirty_background_ratio=3

vm.dirty_writeback_centisecs=100

vm.dirty_expire_centisecs=500

vm.swappiness=10

vm.min_free_kbytes=524288

net.core.netdev_max_backlog = 30000

net.core.netdev_budget = 600

#vm.nr_hugepages =

net.ipv4.conf.all.rp_filter = 2

net.ipv4.conf.default.rp_filter = 2

net.ipv4.ipfrag_time = 60

net.ipv4.ipfrag_low_thresh=6291456

net.ipv4.ipfrag_high_thresh = 8388608

EOF

Apply the parameters:

sysctl -p
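For the 4GB nodes in this guide the two floors in the script both apply, which is easy to verify by replaying the calculation with a fixed sample MemTotal (the value below is an assumption for illustration, roughly 4GB in KB):

```shell
#!/bin/sh
# Sketch: replay the kernel.shmall / kernel.shmmax calculation from the
# script above for a fixed sample MemTotal instead of /proc/meminfo.
memTotal=4045752                          # ~4GB node, in KB

shmall=$((memTotal / 4))                  # 4KB pages: 1/4 of RAM
if [ "$shmall" -lt 2097152 ]; then        # floor: 2097152 pages (8GB)
    shmall=2097152
fi

shmmax=$((memTotal * 1024 - 1))           # bytes: all of RAM minus 1
if [ "$shmmax" -lt 4294967295 ]; then     # floor: 4GB - 1 byte
    shmmax=4294967295
fi

echo "kernel.shmall = $shmall"            # prints: kernel.shmall = 2097152
echo "kernel.shmmax = $shmmax"            # prints: kernel.shmmax = 4294967295
```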

2.3.11 Disabling Unnecessary Services

Disable the services on both hosts:

systemctl disable accounts-daemon.service

systemctl disable atd.service

systemctl disable avahi-daemon.service

systemctl disable avahi-daemon.socket

systemctl disable bluetooth.service

systemctl disable brltty.service

systemctl disable chronyd.service

systemctl disable colord.service

systemctl disable cups.service 

systemctl disable debug-shell.service

systemctl disable firewalld.service

systemctl disable gdm.service

systemctl disable ksmtuned.service

systemctl disable ktune.service  

systemctl disable libstoragemgmt.service 

systemctl disable mcelog.service

systemctl disable ModemManager.service

systemctl disable ntpd.service

systemctl disable postfix.service

systemctl disable rhsmcertd.service 

systemctl disable rngd.service

systemctl disable rpcbind.service

systemctl disable rtkit-daemon.service

systemctl disable tuned.service

systemctl disable upower.service

systemctl disable wpa_supplicant.service

# Stop the services:

systemctl stop accounts-daemon.service

systemctl stop atd.service

systemctl stop avahi-daemon.service

systemctl stop avahi-daemon.socket

systemctl stop bluetooth.service

systemctl stop brltty.service

systemctl stop chronyd.service

systemctl stop colord.service

systemctl stop cups.service 

systemctl stop debug-shell.service

systemctl stop firewalld.service

systemctl stop gdm.service

systemctl stop ksmtuned.service

systemctl stop ktune.service  

systemctl stop libstoragemgmt.service 

systemctl stop mcelog.service

systemctl stop ModemManager.service

systemctl stop ntpd.service

systemctl stop postfix.service

systemctl stop rhsmcertd.service 

systemctl stop rngd.service

systemctl stop rpcbind.service

systemctl stop rtkit-daemon.service

systemctl stop tuned.service

systemctl stop upower.service

systemctl stop wpa_supplicant.service

2.3.12 Creating Users and Directories

1. Create the groups and users:

groupadd -g 54321 oinstall

groupadd -g 54322 dba

groupadd -g 54323 oper

groupadd -g 54324 backupdba

groupadd -g 54325 dgdba

groupadd -g 54326 kmdba

groupadd -g 54327 asmdba

groupadd -g 54328 asmoper

groupadd -g 54329 asmadmin

groupadd -g 54330 racdba

useradd -g oinstall -G dba,oper,backupdba,dgdba,kmdba,asmdba,racdba -u 10000 oracle

useradd -g oinstall -G dba,asmdba,asmoper,asmadmin,racdba -u 10001 grid

echo "oracle" | passwd --stdin oracle

echo "grid" | passwd --stdin grid

2. Create the directories:

mkdir -p /u01/app/19.3.0/grid

mkdir -p /u01/app/grid

mkdir -p /u01/app/oracle/product/19.3.0/dbhome_1

chown -R grid:oinstall /u01

chown -R oracle:oinstall /u01/app/oracle

chmod -R 775 /u01
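The group creation commands above follow a single name/GID pattern; a loop over a small table keeps the numbers in one place. The sketch prints the commands rather than running them, since groupadd needs root:

```shell
#!/bin/sh
# Sketch: emit the ten groupadd commands from a name:gid table.
groups='oinstall:54321 dba:54322 oper:54323 backupdba:54324 dgdba:54325
kmdba:54326 asmdba:54327 asmoper:54328 asmadmin:54329 racdba:54330'

for g in $groups; do
    name=${g%%:*}       # part before the colon
    gid=${g##*:}        # part after the colon
    printf 'groupadd -g %s %s\n' "$gid" "$name"
done | tee /tmp/groupadd_cmds.txt
```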

2.3.13 Environment Variables

Grid user environment variables:

cat >> /home/grid/.bash_profile << "EOF"

################add#########################

umask 022

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/19.3.0/grid

export TNS_ADMIN=$ORACLE_HOME/network/admin

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export ORACLE_SID=+ASM1

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH

alias dba='sqlplus / as sysasm'

export PS1="[\`whoami\`@\`hostname\`:"'$PWD]\$ '

EOF

Oracle user environment variables:

cat >> /home/oracle/.bash_profile << "EOF"

################ add#########################

umask 022

export TMP=/tmp

export TMPDIR=$TMP

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/19.3.0/dbhome_1

export ORACLE_HOSTNAME=dkf19c01

export TNS_ADMIN=$ORACLE_HOME/network/admin

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export ORACLE_SID=orcl1

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH

alias dba='sqlplus / as sysdba'

export PS1="[\`whoami\`@\`hostname\`:"'$PWD]\$ '

EOF

2.3.14 Other Parameter Changes

1. Append the following to /etc/pam.d/login:

cat >> /etc/pam.d/login <<EOF

session required pam_limits.so

EOF

2. Append the following to /etc/profile:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then

  if [ $SHELL = "/bin/ksh" ]; then

    ulimit -p 16384

    ulimit -n 65536

  else

    ulimit -u 16384 -n 65536

  fi

fi

2.3.15 User Resource Limits

Edit the limits configuration file:

cat >> /etc/security/limits.conf <<EOF

grid  soft  nproc  2047

grid  hard  nproc  16384

grid  soft   nofile  1024

grid  hard  nofile  65536

grid  soft   stack  10240

grid  hard  stack  32768

oracle  soft  nproc  2047

oracle  hard  nproc  16384

oracle  soft  nofile  1024

oracle  hard  nofile  65536

oracle  soft  stack  10240

oracle  hard  stack  32768

oracle soft memlock 3145728

oracle hard memlock 3145728

EOF

2.3.16 Time Synchronization

Perform the following cluster time synchronization configuration on both Oracle RAC nodes.

Oracle supports two ways to synchronize time: an operating system configured with the Network Time Protocol (NTP), or the Oracle Cluster Time Synchronization Service (CTSS). CTSS (ctssd) is intended for installations whose Oracle RAC nodes cannot reach an NTP server.

Configuring the Cluster Time Synchronization Service (CTSS)

To let CTSS provide time synchronization within the cluster, deactivate NTP: stop the ntpd service, disable it from the init sequence, and remove the ntp.conf file. As root, on both Oracle RAC nodes:

[root@racnode1 ~]# /sbin/service ntpd stop

[root@racnode1 ~]# chkconfig ntpd off

[root@racnode1 ~]# mv /etc/ntp.conf /etc/ntp.conf.original

Also remove the following file, which holds the NTP daemon's pid:

[root@racnode1 ~]# rm /var/run/ntpd.pid

When the installer finds NTP inactive, it installs CTSS in active mode and synchronizes the time across all nodes. If it finds NTP configured, it starts CTSS in observer mode instead, and Oracle Clusterware performs no active time synchronization in the cluster.

2.3.17 Shut down both hosts and enable the VMware shared folder.

2.3.18 Configuring Shared Disks with udev

Get the UUIDs of the shared disks:

/usr/lib/udev/scsi_id -g -u /dev/sdb

/usr/lib/udev/scsi_id -g -u /dev/sdc

/usr/lib/udev/scsi_id -g -u /dev/sdd

/usr/lib/udev/scsi_id -g -u /dev/sde

/usr/lib/udev/scsi_id -g -u /dev/sdf

[root@dkf19c01 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdb

36000c299c828142efb0230db9c7a9d93

[root@dkf19c01 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdc

36000c29b8c865854d447ef6c0c220137

[root@dkf19c01 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdd

36000c293b90e8742bb8cc98c32d77fc6

[root@dkf19c01 ~]# /usr/lib/udev/scsi_id -g -u /dev/sde

36000c296930fa70e2fd41c6f26af38ac

[root@dkf19c01 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdf

36000c290673aefb6ad44d24b1d986e92

[root@dkf19c01 ~]#

[root@dkf19c01 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c299c828142efb0230db9c7a9d93", RUN+="/bin/sh -c '/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/asm_ocr01 b 8 16; chown grid:asmadmin /dev/asm/asm_ocr01; chmod 0660 /dev/asm/asm_ocr01'"

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29b8c865854d447ef6c0c220137", RUN+="/bin/sh -c '/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/asm_ocr02 b 8 32; chown grid:asmadmin /dev/asm/asm_ocr02; chmod 0660 /dev/asm/asm_ocr02'"

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c293b90e8742bb8cc98c32d77fc6", RUN+="/bin/sh -c '/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/asm_ocr03 b 8 48; chown grid:asmadmin /dev/asm/asm_ocr03; chmod 0660 /dev/asm/asm_ocr03'"

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c296930fa70e2fd41c6f26af38ac", RUN+="/bin/sh -c '/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/asm_data01 b 8 64; chown grid:asmadmin /dev/asm/asm_data01; chmod 0660 /dev/asm/asm_data01'"

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c290673aefb6ad44d24b1d986e92", RUN+="/bin/sh -c '/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/asm_data02 b 8 80; chown grid:asmadmin /dev/asm/asm_data02; chmod 0660 /dev/asm/asm_data02'"

Reload the udev rules and retrigger:

[root@dkf19c01 ~]# /sbin/udevadm control --reload

[root@dkf19c01 ~]# /sbin/udevadm trigger --type=devices --action=change

Verify the device bindings:

[root@dkf19c01 yum.repos.d]# ll /dev/asm*

total 0

brw-rw---- 1 grid asmadmin 8, 64 Feb 14 21:41 asm_data01

brw-rw---- 1 grid asmadmin 8, 80 Feb 14 21:40 asm_data02

brw-rw---- 1 grid asmadmin 8, 16 Feb 14 21:41 asm_ocr01

brw-rw---- 1 grid asmadmin 8, 32 Feb 14 21:41 asm_ocr02

brw-rw---- 1 grid asmadmin 8, 48 Feb 14 21:41 asm_ocr03

[root@dkf19c01 yum.repos.d]#

Repeat the same steps on node 2.
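The five rules differ only in UUID, device name, and minor number, so they can be generated from a table, which avoids copy-paste slips in the long RESULT strings. The sketch writes to a scratch file; the UUIDs are the ones queried above, mapped sdb-sdd to the OCR devices and sde/sdf to DATA, matching the minor numbers used here.

```shell
#!/bin/sh
# Sketch: generate the udev rule lines from a uuid:name:minor table,
# writing to a scratch file (the real file lives in /etc/udev/rules.d).
RULES=/tmp/99-oracle-asmdevices.rules.demo
disks='36000c299c828142efb0230db9c7a9d93:asm_ocr01:16
36000c29b8c865854d447ef6c0c220137:asm_ocr02:32
36000c293b90e8742bb8cc98c32d77fc6:asm_ocr03:48
36000c296930fa70e2fd41c6f26af38ac:asm_data01:64
36000c290673aefb6ad44d24b1d986e92:asm_data02:80'

: > "$RULES"
for d in $disks; do
    uuid=$(echo "$d" | cut -d: -f1)
    name=$(echo "$d" | cut -d: -f2)
    minor=$(echo "$d" | cut -d: -f3)
    printf 'KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="%s", RUN+="/bin/sh -c '\''/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/%s b 8 %s; chown grid:asmadmin /dev/asm/%s; chmod 0660 /dev/asm/%s'\''"\n' \
        "$uuid" "$name" "$minor" "$name" "$name" >> "$RULES"
done
wc -l < "$RULES"    # prints 5
```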

2.3.19 Installing the GI Software

1. As the grid user, go to the shared folder and unzip the Grid package:

[grid@dkf19c01 ~]$ cd /mnt/hgfs/Oracle

[grid@dkf19c01:/mnt/hgfs/Oracle]$ ls

Oracle_grid_V982068-01.zip

Oracle_database_1903-V982063-01.zip

[grid@dkf19c01:/mnt/hgfs/Oracle]$ unzip Oracle_grid_V982068-01.zip -d $ORACLE_HOME

2. Configure a graphical environment: an X11-capable terminal such as MobaXterm is recommended, with the X11 packages installed:

[root@dkf19c02 yum.repos.d]# yum install -y xorg-x11*

3. Install the cvuqdisk package. Run rpm -qa cvuqdisk on both nodes to check whether it is already installed; if not, install it as root on both nodes:

[root@dkf19c01 ~]# cd /u01/app/19.3.0/grid/cv/rpm

[root@dkf19c01 rpm]# ls

cvuqdisk-1.0.10-1.rpm

[root@dkf19c01 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm

Preparing...                          ################################# [100%]

Using default group oinstall to install package

Updating / installing...

   1:cvuqdisk-1.0.10-1                ################################# [100%]

[root@dkf19c01 rpm]#

Node 2:

[root@dkf19c02 ~]# cd /u01/app/19.3.0/grid/cv/rpm

[root@dkf19c02 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm

Preparing...                          ################################# [100%]

Using default group oinstall to install package

Updating / installing...

   1:cvuqdisk-1.0.10-1                ################################# [100%]

[root@dkf19c02 rpm]#

Chapter 3  Grid Installation

3.1 Pre-installation Checks

[root@dkf19c01 rpm]# su - grid

[grid@dkf19c01 ~]$ export CVUQDISK_GRP=oinstall

[grid@dkf19c01 ~]$ cd /u01/app/19.3.0/grid/

[grid@dkf19c01 grid]$ ./runcluvfy.sh stage -pre crsinst -n dkf19c01,dkf19c02 -verbose

3.2 Grid Installation

3.2.1 Running the Installer

[grid@dkf19c01 ~]$ cd /u01/app/19.3.0/grid/

[grid@dkf19c01 grid]$ ./gridsetup.sh

Add the second node:

Set up grid user SSH equivalence between the two nodes:

Create the OCR disk group:

When prompted, run the two root scripts on each node.

Node 1:

[root@dkf19c01 rpm]# /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[root@dkf19c01 rpm]#

Node 2:

[root@dkf19c02 rpm]# /u01/app/oraInventory/orainstRoot.sh

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[root@dkf19c02 rpm]#

Second script, node 1:

[root@dkf19c01 rpm]# /u01/app/19.3.0/grid/root.sh

Performing root user operation.

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Relinking oracle with rac_on option

Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/dkf19c01/crsconfig/rootcrs_dkf19c01_2023-02-10_09-59-01PM.log

2023/02/10 21:59:10 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.

2023/02/10 21:59:10 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.

2023/02/10 21:59:10 CLSRSC-363: User ignored prerequisites during installation

2023/02/10 21:59:10 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.

2023/02/10 21:59:12 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.

2023/02/10 21:59:13 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.

2023/02/10 21:59:13 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.

2023/02/10 21:59:13 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.

2023/02/10 21:59:30 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.

2023/02/10 21:59:33 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.

2023/02/10 21:59:39 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2023/02/10 21:59:47 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.

2023/02/10 21:59:48 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.

2023/02/10 21:59:52 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.

2023/02/10 21:59:52 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

2023/02/10 22:00:14 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.

2023/02/10 22:00:19 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.

2023/02/10 22:00:24 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.

2023/02/10 22:00:28 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.

ASM has been created and started successfully.

[DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-230210PM100058.log for details.

2023/02/10 22:01:54 CLSRSC-482: Running command: '/u01/app/19.3.0/grid/bin/ocrconfig -upgrade grid oinstall'

CRS-4256: Updating the profile

Successful addition of voting disk 6312bdb7b5904f5fbfc453f557492888.

Successful addition of voting disk 451040038e734faebfbff20dbf027e21.

Successful addition of voting disk 7a8cbd0838244f73bfdd80a32c6f1599.

Successfully replaced voting disk group with +OCR.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   6312bdb7b5904f5fbfc453f557492888 (/dev/asm/asm_ocr03) [OCR]

 2. ONLINE   451040038e734faebfbff20dbf027e21 (/dev/asm/asm_ocr02) [OCR]

 3. ONLINE   7a8cbd0838244f73bfdd80a32c6f1599 (/dev/asm/asm_ocr01) [OCR]

Located 3 voting disk(s).

2023/02/10 22:03:20 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.

2023/02/10 22:04:55 CLSRSC-343: Successfully started Oracle Clusterware stack

2023/02/10 22:04:56 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.

2023/02/10 22:06:25 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.

2023/02/10 22:06:58 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@dkf19c01 rpm]#

Second script, node 2:

[root@dkf19c02 rpm]# /u01/app/19.3.0/grid/root.sh

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

Performing root user operation.

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Relinking oracle with rac_on option

Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/dkf19c02/crsconfig/rootcrs_dkf19c02_2023-02-10_10-11-01PM.log

2023/02/10 22:11:07 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.

2023/02/10 22:11:07 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.

2023/02/10 22:11:07 CLSRSC-363: User ignored prerequisites during installation

2023/02/10 22:11:08 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.

2023/02/10 22:11:09 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.

2023/02/10 22:11:09 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.

2023/02/10 22:11:09 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.

2023/02/10 22:11:10 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.

2023/02/10 22:11:11 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.

2023/02/10 22:11:11 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.

2023/02/10 22:11:20 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.

2023/02/10 22:11:20 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.

2023/02/10 22:11:21 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.

2023/02/10 22:11:22 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

2023/02/10 22:11:34 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2023/02/10 22:11:43 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.

2023/02/10 22:11:44 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.

2023/02/10 22:11:46 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.

2023/02/10 22:11:47 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.

2023/02/10 22:11:55 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.

2023/02/10 22:12:44 CLSRSC-343: Successfully started Oracle Clusterware stack

2023/02/10 22:12:44 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.

2023/02/10 22:12:57 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.

2023/02/10 22:13:03 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@dkf19c02 rpm]#

Continue the installation:

 

Some prerequisite checks fail at this point; they can be ignored.

 

Note: if the first installation attempt fails, the shared disks will still carry Grid metadata when you rerun the installer, so they must be wiped before they appear as candidate disks again:

[root@dkf19c01 software]# dd if=/dev/zero of=/dev/asm-ocr1 bs=1M count=10

10+0 records in

10+0 records out

10485760 bytes (10 MB) copied, 0.0051855 s, 2.0 GB/s

[root@dkf19c01 software]# dd if=/dev/zero of=/dev/asm-ocr2 bs=1M count=10

10+0 records in

10+0 records out

10485760 bytes (10 MB) copied, 0.00490229 s, 2.1 GB/s

[root@dkf19c01 software]# dd if=/dev/zero of=/dev/asm-ocr3 bs=1M count=10

10+0 records in

10+0 records out

10485760 bytes (10 MB) copied, 0.00451599 s, 2.3 GB/s

[root@dkf19c01 software]# dd if=/dev/zero of=/dev/asm-data1 bs=1M count=10

10+0 records in

10+0 records out

10485760 bytes (10 MB) copied, 0.00490229 s, 2.1 GB/s

[root@dkf19c01 software]# dd if=/dev/zero of=/dev/asm-data2 bs=1M count=10

10+0 records in

10+0 records out

10485760 bytes (10 MB) copied, 0.00490229 s, 2.2 GB/s
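The five dd commands above can be collapsed into a loop. A minimal sketch follows; the real targets are the udev device names /dev/asm-ocr1..3 and /dev/asm-data1..2, but the loop is demonstrated on temporary files so it is safe to run anywhere. Point DISKS at the real devices only when you intend to destroy their ASM headers.

```shell
# Sketch: wipe the first 10 MB of each ASM candidate disk in a loop.
# Demonstrated on temp files; substitute /dev/asm-* on a real node.
workdir=$(mktemp -d)
DISKS="$workdir/asm-ocr1 $workdir/asm-ocr2 $workdir/asm-ocr3 $workdir/asm-data1 $workdir/asm-data2"

for disk in $DISKS; do
  # simulate a disk that already carries stale ASM data
  head -c 20971520 /dev/urandom > "$disk"
done

wiped=0
for disk in $DISKS; do
  # conv=notrunc mirrors real device behavior: only the header is zeroed,
  # the rest of the "disk" is left in place
  dd if=/dev/zero of="$disk" bs=1M count=10 conv=notrunc 2>/dev/null
  # confirm the first 10 MB are now all zero
  if cmp -s -n 10485760 "$disk" /dev/zero; then
    wiped=$((wiped+1))
    echo "$(basename "$disk"): header wiped"
  fi
done

rm -rf "$workdir"
```

After wiping, the disks should show up as CANDIDATE again in the Grid installer's disk selection screen.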

3.2.2 Cluster Verification

[grid@dkf19c01:/home/grid]$ crsctl stat res -t

--------------------------------------------------------------------------------

Name           Target  State        Server                   State details      

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

               ONLINE  ONLINE       dkf19c01                 STABLE

               ONLINE  ONLINE       dkf19c02                 STABLE

ora.chad

               ONLINE  ONLINE       dkf19c01                 STABLE

               ONLINE  ONLINE       dkf19c02                 STABLE

ora.net1.network

               ONLINE  ONLINE       dkf19c01                 STABLE

               ONLINE  ONLINE       dkf19c02                 STABLE

ora.ons

               ONLINE  ONLINE       dkf19c01                 STABLE

               ONLINE  ONLINE       dkf19c02                 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)

      1        ONLINE  ONLINE       dkf19c01                 STABLE

      2        ONLINE  ONLINE       dkf19c02                 STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       dkf19c02                 STABLE

ora.LISTENER_SCAN2.lsnr

      1        ONLINE  ONLINE       dkf19c01                 STABLE

ora.LISTENER_SCAN3.lsnr

      1        ONLINE  ONLINE       dkf19c01                 STABLE

ora.OCR.dg(ora.asmgroup)

      1        ONLINE  ONLINE       dkf19c01                 STABLE

      2        ONLINE  ONLINE       dkf19c02                 STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.asm(ora.asmgroup)

      1        ONLINE  ONLINE       dkf19c01                 Started,STABLE

      2        ONLINE  ONLINE       dkf19c02                 Started,STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.asmnet1.asmnetwork(ora.asmgroup)

      1        ONLINE  ONLINE       dkf19c01                 STABLE

      2        ONLINE  ONLINE       dkf19c02                 STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.cvu

      1        ONLINE  ONLINE       dkf19c01                 STABLE

ora.dkf19c01.vip

      1        ONLINE  ONLINE       dkf19c01                 STABLE

ora.dkf19c02.vip

      1        ONLINE  ONLINE       dkf19c02                 STABLE

ora.qosmserver

      1        ONLINE  ONLINE       dkf19c01                 STABLE

ora.scan1.vip

      1        ONLINE  ONLINE       dkf19c02                 STABLE

ora.scan2.vip

      1        ONLINE  ONLINE       dkf19c01                 STABLE

ora.scan3.vip

      1        ONLINE  ONLINE       dkf19c01                 STABLE

--------------------------------------------------------------------------------

[grid@dkf19c01:/home/grid]$

[grid@dkf19c01:/home/grid]$

                                        

[grid@dkf19c02:/home/grid]$ crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

[grid@dkf19c02:/home/grid]$

[grid@dkf19c02:/home/grid]$ crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   6312bdb7b5904f5fbfc453f557492888 (/dev/asm/asm_ocr03) [OCR]

 2. ONLINE   451040038e734faebfbff20dbf027e21 (/dev/asm/asm_ocr02) [OCR]

 3. ONLINE   7a8cbd0838244f73bfdd80a32c6f1599 (/dev/asm/asm_ocr01) [OCR]

Located 3 voting disk(s).

[grid@dkf19c02:/home/grid]$

Resource group status:

[grid@dkf19c02:/home/grid]$ crsctl status resource -t

--------------------------------------------------------------------------------

Name           Target  State        Server                   State details      

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

               ONLINE  ONLINE       dkf19c01                 STABLE

               ONLINE  ONLINE       dkf19c02                 STABLE

ora.chad

               ONLINE  ONLINE       dkf19c01                 STABLE

               ONLINE  ONLINE       dkf19c02                 STABLE

ora.net1.network

               ONLINE  ONLINE       dkf19c01                 STABLE

               ONLINE  ONLINE       dkf19c02                 STABLE

ora.ons

               ONLINE  ONLINE       dkf19c01                 STABLE

               ONLINE  ONLINE       dkf19c02                 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)

      1        ONLINE  ONLINE       dkf19c01                 STABLE

      2        ONLINE  ONLINE       dkf19c02                 STABLE

      3        ONLINE  OFFLINE                               STABLE

ora.DATA.dg(ora.asmgroup)

      1        ONLINE  ONLINE       dkf19c01                 STABLE

      2        ONLINE  ONLINE       dkf19c02                 STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       dkf19c01                 STABLE

ora.LISTENER_SCAN2.lsnr

      1        ONLINE  ONLINE       dkf19c02                 STABLE

ora.LISTENER_SCAN3.lsnr

      1        ONLINE  ONLINE       dkf19c02                 STABLE

ora.OCR.dg(ora.asmgroup)

      1        ONLINE  ONLINE       dkf19c01                 STABLE

      2        ONLINE  ONLINE       dkf19c02                 STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.asm(ora.asmgroup)

      1        ONLINE  ONLINE       dkf19c01                 Started,STABLE

      2        ONLINE  ONLINE       dkf19c02                 Started,STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.asmnet1.asmnetwork(ora.asmgroup)

      1        ONLINE  ONLINE       dkf19c01                 STABLE

      2        ONLINE  ONLINE       dkf19c02                 STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.cvu

      1        ONLINE  ONLINE       dkf19c02                 STABLE

ora.dkf19c.db

      1        ONLINE  ONLINE       dkf19c01                 Open,HOME=/u01/app/o

                                                             racle/product/19.3.0

                                                             /dbhome_1,STABLE

      2        ONLINE  ONLINE       dkf19c02                 Open,HOME=/u01/app/o

                                                             racle/product/19.3.0

                                                             /dbhome_1,STABLE

ora.dkf19c01.vip

      1        ONLINE  ONLINE       dkf19c01                 STABLE

ora.dkf19c02.vip

      1        ONLINE  ONLINE       dkf19c02                 STABLE

ora.qosmserver

      1        ONLINE  ONLINE       dkf19c02                 STABLE

ora.scan1.vip

      1        ONLINE  ONLINE       dkf19c01                 STABLE

ora.scan2.vip

      1        ONLINE  ONLINE       dkf19c02                 STABLE

ora.scan3.vip

      1        ONLINE  ONLINE       dkf19c02                 STABLE

--------------------------------------------------------------------------------

[grid@dkf19c02:/home/grid]$

Check the cluster nodes:

[grid@dkf19c01 ~]$ olsnodes -s

dkf19c01    Active

dkf19c02    Active

[grid@dkf19c01 ~]$ crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online
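For monitoring scripts, the `crsctl check crs` output can be checked programmatically by counting the "is online" lines. The sketch below inlines the transcript above so it runs anywhere; on a live node, replace the assignment with `crs_out=$(crsctl check crs)`.

```shell
# Script-friendly CRS health check: all four services must report "is online".
# The captured output from the transcript is inlined for demonstration.
crs_out='CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online'

online=$(printf '%s\n' "$crs_out" | grep -c 'is online')
if [ "$online" -eq 4 ]; then
  echo "clusterware stack healthy"
else
  echo "only $online/4 services online" >&2
fi
```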

Check the cluster time synchronization service:

[grid@dkf19c02:/home/grid]$ crsctl check ctss

CRS-4701: The Cluster Time Synchronization Service is in Active mode.

Log in to node 1 as the grid user.

When Clusterware was installed, an ASM instance was created, but only the OCR disk group was created, to hold the OCR and voting disks. Before we install the Oracle database, we need to create a DATA disk group in ASM to hold the database files.

The process is simple: run the asmca (ASM Configuration Assistant) command to open the creation window, create the DATA disk group, then exit.

As the grid user, run asmca to start the ASM disk group creation wizard.

Click the Create button; in the dialog that opens, enter the disk group name, choose External redundancy, tick the member disks, and click OK.

  1. As the grid user, run asmca
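As a non-interactive alternative to the asmca GUI (a sketch, not part of the original install), the same DATA disk group can be created from SQL*Plus on the ASM instance. The snippet only prints the statement; on a live node, run it as SYSASM (`sqlplus / as sysasm`) as the grid user.

```shell
# The SQL equivalent of the GUI steps: DATA disk group, external redundancy,
# using the two udev data devices from the storage plan.
sql="CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/asm-data1', '/dev/asm-data2';"
printf '%s\n' "$sql"
```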

 

 

Log in as the oracle user and unzip the software:

[oracle@dkf19c01:/mnt/hgfs/Oracle]$ unzip Oracle_database_1903-V982063-01.zip -d $ORACLE_HOME

5.2 GUI Installation

[oracle@dkf19c01:/mnt/hgfs/Oracle]$ cd $ORACLE_HOME/

[oracle@dkf19c01:/u01/app/oracle/product/19.3.0/dbhome_1]$ ./runInstaller

 

 

 

 

 

After the installer completes, run the script on both nodes:

[root@dkf19c01 ~]# /u01/app/oracle/product/19.3.0/dbhome_1/root.sh

Performing root user operation for Oracle 19c

The following environment variables are set as:

    ORACLE_OWNER= oracle

    ORACLE_HOME=  /u01/app/oracle/product/19.3.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

Installation complete.

As the oracle user, run: dbca

 

 

 

 

 

 

 

 

 

 

Log in to the database to verify:

[oracle@dkf19c01:/home/oracle]$

[oracle@dkf19c01:/home/oracle]$ dba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Feb 14 22:17:53 2023

Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.3.0.0.0

SQL>

SQL> show pbds;

SP2-0158: unknown SHOW option "pbds"

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       READ ONLY  NO

         3 PDKF01                         READ WRITE NO

SQL>

RAC is a complete clustered application environment: it implements not only the cluster itself but also the application running on top of it, the Oracle database. Compared with an ordinary cluster, or with an ordinary Oracle database, RAC has some unique characteristics.

A RAC consists of at least two nodes connected by a public network and a private network. The private network carries inter-node communication, while the public network serves user access. Each node runs an Oracle database instance and a listener, each listening for user requests on an IP address known as the VIP (Virtual IP). Users can send requests to the database server behind any VIP and access the database through any instance. Clusterware monitors the state of each node; if a node fails, it relocates that node's database instance, its VIP, and other resources to another node, so that users can still reach the database through that VIP.

In an ordinary Oracle database, one instance can open only one database, and one database can be opened by only one instance. In a RAC environment, multiple instances, each running on a different node, access the same database concurrently, and the database resides on shared storage.

RAC therefore provides not only concurrent access to the database but also load balancing of user access. Users can work through any instance, and the instances communicate internally to guarantee transactional consistency. For example, when a user modifies data in one instance, the data is locked; if another user modifies the same data in a different instance, that session must wait for the lock to be released. As soon as the first user commits, the second user immediately sees the modified data.
