Oracle 19c RAC: Changing the Private and Public Network IPs

Changing the subnets and IPs of the private network, public network, VIPs, and SCAN IP in Oracle 19c RAC.

Version: Oracle 19c

IP plan

Hostname     Type     Old IP           New IP           Old ifname  New ifname
node1        Public   192.168.100.10   192.168.200.10   bond0       bond0
node2        Public   192.168.100.20   192.168.200.20   bond0       bond0
node1-vip    VIP      192.168.100.11   192.168.200.11   bond0       bond0
node2-vip    VIP      192.168.100.21   192.168.200.21   bond0       bond0
node1-priv   Private  192.168.10.10    192.168.20.10    bond1       bond1
node2-priv   Private  192.168.10.20    192.168.20.20    bond1       bond1
racdb-scan   SCAN     192.168.100.30   192.168.200.30   bond0       bond0

1. Changing the Private Network IP

In 19c, the cluster private network carries the cluster interconnect and the ASM network, so this change mainly involves cluster resources.

Unless noted otherwise, the operations are performed as the grid user on one node of the cluster.

  1. Check the current environment state

(1) Check the current cluster state and make sure all nodes in the cluster are up:

[grid@node1 ~]$ crsctl status res -t

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.chad

ONLINE ONLINE node1 STABLE

ONLINE ONLINE node2 STABLE

ora.net1.network

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.ons

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.proxy_advm

OFFLINE OFFLINE node1 STABLE

OFFLINE OFFLINE node2 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.ARCH.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.DATA.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE node1 STABLE

ora.OCR.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.asm(ora.asmgroup)

1 ONLINE ONLINE node1 Started,STABLE

2 ONLINE ONLINE node2 Started,STABLE

ora.asmnet1.asmnetwork(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.cvu

1 ONLINE ONLINE node1 STABLE

ora.racdb.db

1 OFFLINE OFFLINE Instance Shutdown,STABLE

2 OFFLINE OFFLINE Instance Shutdown,STABLE

ora.node1.vip

1 ONLINE ONLINE node1 STABLE

ora.node2.vip

1 ONLINE INTERMEDIATE node1 FAILED OVER,STABLE

ora.qosmserver

1 ONLINE ONLINE node1 STABLE

ora.scan1.vip

1 ONLINE ONLINE node1 STABLE

(2) Check whether ASM Flex mode is enabled

[grid@node1 ~]$ asmcmd

ASMCMD> showclustermode

ASM cluster : Flex mode enabled - Direct Storage Access

ASMCMD> showclusterstate

Normal

  2. Back up the GPnP profile

The private network information is stored not only in the OCR but also in the GPnP profile.xml, so back up this file first:

cd $ORACLE_HOME/gpnp/node1/profiles/peer

[grid@node1 peer]$ cp profile.xml profile.xml230207
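The backup can be scripted so each node keeps a dated copy. A minimal sketch, assuming the standard Grid home layout under `$ORACLE_HOME/gpnp/<host>/profiles/peer`; the throwaway directory below only stands in for a real Grid home:

```shell
#!/bin/sh
# Illustration only: build a throwaway copy of the GPnP layout so the
# backup command can be shown end to end. On a real node the Grid home
# already exists and only the cp/ls lines are needed.
ORACLE_HOME=$(mktemp -d)                      # stand-in for the Grid home
PEER="$ORACLE_HOME/gpnp/node1/profiles/peer"  # hostname is an assumption
mkdir -p "$PEER"
printf '<gpnp/>\n' > "$PEER/profile.xml"      # stand-in profile

# The actual backup step: keep a dated copy next to the original.
STAMP=$(date +%Y%m%d)
cp "$PEER/profile.xml" "$PEER/profile.xml.$STAMP"
ls "$PEER"
```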

  3. View the currently configured networks

[grid@node1 peer]$ oifcfg getif

bond0 192.168.100.0 global public

bond1 192.168.10.0 global cluster_interconnect,asm

  4. Add the new cluster private network information

[grid@node1 peer]$ oifcfg setif -global bond1/192.168.20.0:cluster_interconnect,asm

[grid@node1 peer]$ oifcfg getif

bond0 192.168.100.0 global public

bond1 192.168.20.0 global cluster_interconnect,asm

bond1 192.168.10.0 global cluster_interconnect,asm
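Note that oifcfg takes the subnet address (192.168.20.0), not a host IP. If in doubt, the subnet can be derived from an interface IP and netmask; a small helper sketch, not anything Oracle ships:

```shell
#!/bin/sh
# Derive the network address oifcfg expects from a host IP and netmask
# by ANDing the octets (pure shell arithmetic, safe to run anywhere).
subnet() {
    IFS=. read -r i1 i2 i3 i4 <<EOF
$1
EOF
    IFS=. read -r m1 m2 m3 m4 <<EOF
$2
EOF
    echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
}

subnet 192.168.20.10 255.255.255.0   # prints 192.168.20.0
```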

  5. Modify the ASM listener

The ASM listener is bound to the private network, so changing that network affects it: a new ASM listener with the new network configuration must be added. Skip this step if the ASM subnet is not changing.

(1) View the current ASM listener configuration

[grid@node1 peer]$ srvctl config asm

ASM home: <CRS home>

Password file: +OCR/orapwASM

Backup of Password file: +OCR/orapwASM_backup

ASM listener: LISTENER

ASM instance count: 2

Cluster ASM listener: ASMNET1LSNR_ASM

[grid@node1 peer]$ srvctl config listener -asmlistener

Name: ASMNET1LSNR_ASM

Type: ASM Listener

Owner: grid

Subnet: 192.168.10.0

Home: <CRS home>

End points: TCP:1525

Listener is enabled.

Listener is individually enabled on nodes:

Listener is individually disabled on nodes:

(2) Add a new ASM listener

srvctl add listener -asmlistener -l ASMNET2LSNR_ASM -subnet 192.168.20.0

(3) Configuration after adding the new ASM listener

[grid@node1 peer]$ srvctl config listener -asmlistener

Name: ASMNET1LSNR_ASM

Type: ASM Listener

Owner: grid

Subnet: 192.168.10.0

Home: <CRS home>

End points: TCP:1525

Listener is enabled.

Listener is individually enabled on nodes:

Listener is individually disabled on nodes:

Name: ASMNET2LSNR_ASM

Type: ASM Listener

Owner: grid

Subnet: 192.168.20.0

Home: <CRS home>

End points: TCP:1526

Listener is enabled.

Listener is individually enabled on nodes:

Listener is individually disabled on nodes:

(4) Remove the original ASM listener

srvctl update listener -listener ASMNET1LSNR_ASM -asm -remove -force

(5) View the configuration after the change

[grid@node1 peer]$ srvctl config listener -asmlistener

Name: ASMNET2LSNR_ASM

Type: ASM Listener

Owner: grid

Subnet: 192.168.20.0

Home: <CRS home>

End points: TCP:1526

Listener is enabled.

Listener is individually enabled on nodes:

Listener is individually disabled on nodes:

[grid@node1 peer]$ srvctl config asm

ASM home: <CRS home>

Password file: +OCR/orapwASM

Backup of Password file: +OCR/orapwASM_backup

ASM listener: LISTENER

ASM instance count: 2

Cluster ASM listener: ASMNET2LSNR_ASM
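The listener swap above can be collected into one reviewable script. The sketch below only echoes the srvctl calls (a dry run) so the order can be checked first; the names and subnet are the ones from this plan, and the echo wrapper would be removed to execute for real as grid:

```shell
#!/bin/sh
# Dry-run sketch of the ASM listener swap: add the new listener on the
# new subnet, then drop the old one. run() echoes instead of executing.
NEW_LSNR=ASMNET2LSNR_ASM
NEW_SUBNET=192.168.20.0
OLD_LSNR=ASMNET1LSNR_ASM

run() { echo "srvctl $*"; }   # remove the echo to execute for real

run add listener -asmlistener -l "$NEW_LSNR" -subnet "$NEW_SUBNET"
run update listener -listener "$OLD_LSNR" -asm -remove -force
run config listener -asmlistener
```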

  6. Stop and disable CRS on both nodes

Node 1:

[root@node1 ~]# cd /app/19.0.0/grid/bin/

[root@node1 bin]# ./crsctl stop crs

[root@node1 bin]# ./crsctl disable crs

CRS-4621: Oracle High Availability Services autostart is disabled.

Node 2:

[root@node2 bin]# ./crsctl stop crs

[root@node2 bin]# ./crsctl disable crs

CRS-4621: Oracle High Availability Services autostart is disabled.

  7. Change the private IP at the OS level on all nodes and update /etc/hosts

Node 1:

[root@node1 network-scripts]# vi ifcfg-bond1

BONDING_OPTS=mode=active-backup

TYPE=Bond

BONDING_MASTER=yes

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=static

IPADDR=192.168.20.10 # changed from 192.168.10.10

NETMASK=255.255.255.0

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

NAME=bond1

DEVICE=bond1

ONBOOT=yes

nmcli c reload

nmcli d reapply bond1

vi /etc/hosts

#public ip

192.168.100.10 node1

192.168.100.20 node2

#vip

192.168.100.11 node1-vip

192.168.100.21 node2-vip

#private ip

192.168.20.10 node1-priv

192.168.20.20 node2-priv

#scan ip

192.168.100.30 racdb-scan

Repeat on node 2, taking care to use node 2's IPs.
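To keep the two nodes' files identical, the new /etc/hosts block can be rendered once and copied to both nodes. A sketch, staging to /tmp (an assumed path) rather than writing /etc/hosts directly:

```shell
#!/bin/sh
# Stage the new hosts block for review; a real run would merge it into
# /etc/hosts as root on both nodes (e.g. via scp).
HOSTS_STAGE=/tmp/hosts.rac   # staging path (assumption)
cat > "$HOSTS_STAGE" <<'EOF'
#public ip
192.168.100.10 node1
192.168.100.20 node2
#vip
192.168.100.11 node1-vip
192.168.100.21 node2-vip
#private ip
192.168.20.10 node1-priv
192.168.20.20 node2-priv
#scan ip
192.168.100.30 racdb-scan
EOF

# sanity check: any duplicated IPs would print here (expect no output)
awk '{print $1}' "$HOSTS_STAGE" | grep -v '^#' | sort | uniq -d
```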

  8. Re-enable and start CRS on both nodes

Node 1:

[root@node1 ~]# cd /app/19.0.0/grid/bin/

[root@node1 bin]# ./crsctl enable crs

CRS-4621: Oracle High Availability Services autostart is enabled.

[root@node1 bin]# ./crsctl start crs

Node 2:

[root@node2 bin]# ./crsctl enable crs

CRS-4621: Oracle High Availability Services autostart is enabled.

[root@node2 bin]# ./crsctl start crs

  9. Delete the old network interface information

[grid@node1 ~]$ oifcfg getif

bond0 192.168.100.0 global public

bond1 192.168.20.0 global cluster_interconnect,asm

bond1 192.168.10.0 global cluster_interconnect,asm

[grid@node1 ~]$ oifcfg delif -global bond1/192.168.10.0

[grid@node1 ~]$ oifcfg getif

bond0 192.168.100.0 global public

bond1 192.168.20.0 global cluster_interconnect,asm

  10. Update the ora.asmnet1.asmnetwork resource

Run as root:

# remove the original resource

[root@node1 bin]# ./srvctl remove asmnetwork -netnum 1 -force

# add the new resource

[root@node1 bin]# ./srvctl add asmnetwork -netnum 1 -subnet 192.168.20.0

# start the new resource

[root@node1 bin]# ./srvctl start asmnetwork -netnum 1

  11. Check the current cluster state

[root@node1 bin]# ./crsctl status res -t

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.chad

ONLINE ONLINE node1 STABLE

ONLINE ONLINE node2 STABLE

ora.net1.network

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.ons

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.proxy_advm

OFFLINE OFFLINE node1 STABLE

OFFLINE OFFLINE node2 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.ARCH.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.ASMNET2LSNR_ASM.lsnr(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.DATA.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE node1 STABLE

ora.OCR.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.asm(ora.asmgroup)

1 ONLINE ONLINE node1 Started,STABLE

2 ONLINE ONLINE node2 Started,STABLE

ora.asmnet1.asmnetwork(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.cvu

1 ONLINE ONLINE node1 STABLE

ora.racdb.db

1 OFFLINE OFFLINE Instance Shutdown,STABLE

2 OFFLINE OFFLINE Instance Shutdown,STABLE

ora.node1.vip

1 ONLINE ONLINE node1 STABLE

ora.node2.vip

1 ONLINE INTERMEDIATE node1 FAILED OVER,STABLE

ora.qosmserver

1 ONLINE ONLINE node1 STABLE

ora.scan1.vip

1 ONLINE ONLINE node1 STABLE

  12. Reboot both servers and verify that the cluster starts normally

2. Changing the Public IP, VIPs, and SCAN IP

  1. Check the current cluster state

[grid@node1 ~]$ crsctl status res -t

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.chad

ONLINE ONLINE node1 STABLE

ONLINE ONLINE node2 STABLE

ora.net1.network

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.ons

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.proxy_advm

OFFLINE OFFLINE node1 STABLE

OFFLINE OFFLINE node2 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.ARCH.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.ASMNET2LSNR_ASM.lsnr(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.DATA.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE node1 STABLE

ora.OCR.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.asm(ora.asmgroup)

1 ONLINE ONLINE node1 Started,STABLE

2 ONLINE ONLINE node2 Started,STABLE

ora.asmnet1.asmnetwork(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.cvu

1 ONLINE ONLINE node1 STABLE

ora.racdb.db

1 OFFLINE OFFLINE Instance Shutdown,STABLE

2 OFFLINE OFFLINE Instance Shutdown,STABLE

ora.node1.vip

1 ONLINE ONLINE node1 STABLE

ora.node2.vip

1 ONLINE INTERMEDIATE node1 FAILED OVER,STABLE

ora.qosmserver

1 ONLINE ONLINE node1 STABLE

ora.scan1.vip

1 ONLINE ONLINE node1 STABLE

  2. Stop the listeners

# run as grid on one node

[grid@node1 ~]$ srvctl stop listener

[grid@node1 ~]$ srvctl disable listener

  3. Stop the database

# run as oracle on one node

[oracle@node1 ~]$ srvctl stop database -d racdb -o immediate

[oracle@node1 ~]$ srvctl disable database -d racdb

  4. Stop the VIP services

# as root on one node, disable the VIPs

cd /app/19.0.0/grid/bin/

[root@node1 bin]# ./srvctl disable vip -i node1-vip

[root@node1 bin]# ./srvctl disable vip -i node2-vip

# as grid on one node, stop the VIPs

[grid@node1 ~]$ srvctl stop vip -n node1

[grid@node1 ~]$ srvctl stop vip -n node2

  5. Stop the SCAN and SCAN listener

# as grid on one node, stop and disable the scan_listener

[grid@node1 ~]$ srvctl stop scan_listener

[grid@node1 ~]$ srvctl disable scan_listener

# as root on one node, stop and disable the SCAN

cd /app/19.0.0/grid/bin/

[root@node1 bin]# ./srvctl stop scan

[root@node1 bin]# ./srvctl disable scan

  6. Stop CRS

# as root, stop CRS on both nodes

cd /app/19.0.0/grid/bin/

[root@node1 bin]# ./crsctl stop crs

[root@node2 bin]# ./crsctl stop crs

  7. Modify /etc/hosts and the NIC IPs

# make the changes on both nodes

vi /etc/hosts

#public ip

192.168.200.10 node1 # changed from 192.168.100.10

192.168.200.20 node2

#vip

192.168.200.11 node1-vip

192.168.200.21 node2-vip

#private ip

192.168.20.10 node1-priv

192.168.20.20 node2-priv

#scan ip

192.168.200.30 racdb-scan

Changing the NIC IPs is omitted here; it is the same procedure as for the private NIC above.
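Before starting CRS it is worth confirming that every new public-side entry actually made it into each node's hosts file. A small checker sketch; the path and demo contents below are stand-ins, and on a real node HOSTS would point at /etc/hosts:

```shell
#!/bin/sh
# Verify the expected post-change entries are present in a hosts file.
HOSTS=/tmp/hosts.check   # stand-in; use /etc/hosts on a real node
cat > "$HOSTS" <<'EOF'
192.168.200.10 node1
192.168.200.20 node2
192.168.200.11 node1-vip
192.168.200.21 node2-vip
192.168.20.10 node1-priv
192.168.20.20 node2-priv
192.168.200.30 racdb-scan
EOF

status=ok
for entry in "192.168.200.10 node1" "192.168.200.20 node2" \
             "192.168.200.11 node1-vip" "192.168.200.21 node2-vip" \
             "192.168.200.30 racdb-scan"; do
    grep -qx "$entry" "$HOSTS" || { echo "missing: $entry"; status=bad; }
done
echo "hosts check: $status"
```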

  8. Start the cluster

# as root, start CRS on both nodes

cd /app/19.0.0/grid/bin/

[root@node1 bin]# ./crsctl start crs

[root@node2 bin]# ./crsctl start crs

  9. Modify the public network information

# delete the old public network entry

[grid@node1 ~]$ oifcfg delif -global bond0/192.168.100.0

[grid@node1 ~]$ oifcfg getif

bond1 192.168.20.0 global cluster_interconnect,asm

# add the new public network entry

[grid@node1 ~]$ oifcfg setif -global bond0/192.168.200.0:public

[grid@node1 ~]$ oifcfg getif

bond1 192.168.20.0 global cluster_interconnect,asm

bond0 192.168.200.0 global public

  10. Modify the VIP information

# view the original VIP configuration

[grid@node1 ~]$ srvctl config nodeapps -a

Network 1 exists

Subnet IPv4: 192.168.100.0/255.255.255.0/bond0, static

Subnet IPv6:

Ping Targets:

Network is enabled

Network is individually enabled on nodes:

Network is individually disabled on nodes:

VIP exists: network number 1, hosting node node1

VIP Name: node1-vip

VIP IPv4 Address: 192.168.100.11

VIP IPv6 Address:

VIP is enabled.

VIP is individually enabled on nodes:

VIP is individually disabled on nodes:

VIP exists: network number 1, hosting node node2

VIP Name: node2-vip

VIP IPv4 Address: 192.168.100.21

VIP IPv6 Address:

VIP is enabled.

VIP is individually enabled on nodes:

VIP is individually disabled on nodes:

# as root on one node, modify the VIP information

cd /app/19.0.0/grid/bin/

[root@node1 bin]# ./srvctl modify nodeapps -n node1 -A 192.168.200.11/255.255.255.0/bond0

[root@node1 bin]# ./srvctl modify nodeapps -n node2 -A 192.168.200.21/255.255.255.0/bond0

# verify the changed configuration

[root@node1 bin]# ./srvctl config nodeapps -a

Network 1 exists

Subnet IPv4: 192.168.200.0/255.255.255.0/bond0, static

Subnet IPv6:

Ping Targets:

Network is enabled

Network is individually enabled on nodes:

Network is individually disabled on nodes:

VIP exists: network number 1, hosting node node1

VIP Name: node1-vip

VIP IPv4 Address: 192.168.200.11

VIP IPv6 Address:

VIP is enabled.

VIP is individually enabled on nodes:

VIP is individually disabled on nodes:

VIP exists: network number 1, hosting node node2

VIP Name: node2-vip

VIP IPv4 Address: 192.168.200.21

VIP IPv6 Address:

VIP is enabled.

VIP is individually enabled on nodes:

VIP is individually disabled on nodes:

  11. Modify the cluster SCAN IP

# as root on one node, modify the SCAN IP

# view the original state

[grid@node1 ~]$ srvctl config scan

SCAN name: racdb-scan, Network: 1

Subnet IPv4: 192.168.100.0/255.255.255.0/bond0, static

Subnet IPv6:

SCAN 1 IPv4 VIP: 192.168.100.30

SCAN VIP is enabled.

# modify the SCAN IP

cd /app/19.0.0/grid/bin/

[root@node1 bin]# ./srvctl modify scan -n 192.168.200.30

  12. Start the listeners, VIPs, SCAN, and database

# as root on one node

[root@node1 ~]# cd /app/19.0.0/grid/bin/

[root@node1 bin]# ./srvctl enable listener

[root@node1 bin]# ./srvctl enable vip -i node1-vip

[root@node1 bin]# ./srvctl enable vip -i node2-vip

[root@node1 bin]# ./srvctl enable scan_listener

[root@node1 bin]# ./srvctl enable scan

[root@node1 bin]# ./srvctl enable database -d racdb

[root@node1 bin]# ./srvctl start listener

[root@node1 bin]# ./srvctl start vip -n node1

[root@node1 bin]# ./srvctl start vip -n node2

[root@node1 bin]# ./srvctl start scan

[root@node1 bin]# ./srvctl start scan_listener

[root@node1 bin]# ./srvctl start database -d racdb
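The enable/start sequence lends itself to a script. The sketch below shadows srvctl with an echo so the whole sequence can be reviewed as a dry run first; deleting the shell function makes it execute for real (as root, from the Grid home):

```shell
#!/bin/sh
# Dry run of the re-enable/start sequence. The shell function shadows
# the real srvctl binary and just prints each call in order.
srvctl() { echo "srvctl $*"; }

# $res is left unquoted on purpose so multi-word items split into args
for res in listener "vip -i node1-vip" "vip -i node2-vip" \
           scan_listener scan "database -d racdb"; do
    srvctl enable $res
done

srvctl start listener
srvctl start vip -n node1
srvctl start vip -n node2
srvctl start scan
srvctl start scan_listener
srvctl start database -d racdb
```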

  13. Check the listener status

[grid@node1 ~]$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 07-FEB-2023 15:20:48

Copyright (c) 1991, 2021, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1551)))

STATUS of the LISTENER

------------------------

Alias LISTENER

Version TNSLSNR for Linux: Version 19.0.0.0.0 - Production

Start Date 07-FEB-2023 15:19:14

Uptime 0 days 0 hr. 1 min. 33 sec

Trace Level off

Security ON: Local OS Authentication

SNMP OFF

Listener Parameter File /app/19.0.0/grid/network/admin/listener.ora

Listener Log File /app/grid/diag/tnslsnr/node1/listener/alert/log.xml

Listening Endpoints Summary...

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.200.10)(PORT=1551)))

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.200.11)(PORT=1551)))

Services Summary...

Service "racdb" has 1 instance(s).

Instance "hnkdb1", status UNKNOWN, has 1 handler(s) for this service...

The command completed successfully

  14. Update the VIPs for ASM and the database

Update listener.ora, tnsnames.ora, and the LOCAL_LISTENER/REMOTE_LISTENER parameters to reflect the VIP change.

# ASM change, as grid

sqlplus / as sysasm

alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.200.11)(PORT=1551))' scope=both sid='+ASM1';

alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.200.21)(PORT=1551))' scope=both sid='+ASM2';

# database change, as oracle

sqlplus / as sysdba

alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.200.11)(PORT=1551))' scope=both sid='hnkdb1';

alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.200.21)(PORT=1551))' scope=both sid='hnkdb2';

The listener.ora and tnsnames.ora changes are omitted here.
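Since the same statement is needed four times with different hosts and SIDs, it can be generated once and reviewed before running the ASM half via sqlplus as sysasm and the database half as sysdba. A sketch; the output path is an assumption, while the VIPs, port 1551, and SIDs come from this environment:

```shell
#!/bin/sh
# Render the ALTER SYSTEM statements for the new VIPs into one SQL file.
PORT=1551
OUT=/tmp/fix_local_listener.sql   # output path (assumption)

gen_local_listener() {
    # $1 = VIP address, $2 = instance SID
    printf "alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=%s))' scope=both sid='%s';\n" \
        "$1" "$PORT" "$2"
}

{
    gen_local_listener 192.168.200.11 +ASM1   # run as sysasm
    gen_local_listener 192.168.200.21 +ASM2   # run as sysasm
    gen_local_listener 192.168.200.11 hnkdb1  # run as sysdba
    gen_local_listener 192.168.200.21 hnkdb2  # run as sysdba
} > "$OUT"
cat "$OUT"
```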

  15. Check the cluster state

[grid@node1 ~]$ crsctl status res -t

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.chad

ONLINE ONLINE node1 STABLE

ONLINE ONLINE node2 STABLE

ora.net1.network

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.ons

ONLINE ONLINE node1 STABLE

ONLINE OFFLINE node2 STABLE

ora.proxy_advm

OFFLINE OFFLINE node1 STABLE

OFFLINE OFFLINE node2 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.ARCH.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.ASMNET2LSNR_ASM.lsnr(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.DATA.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE node1 STABLE

ora.OCR.dg(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.asm(ora.asmgroup)

1 ONLINE ONLINE node1 Started,STABLE

2 ONLINE ONLINE node2 Started,STABLE

ora.asmnet1.asmnetwork(ora.asmgroup)

1 ONLINE ONLINE node1 STABLE

2 ONLINE ONLINE node2 STABLE

ora.cvu

1 ONLINE ONLINE node1 STABLE

ora.racdb.db

1 OFFLINE OFFLINE Instance Shutdown,STABLE

2 OFFLINE OFFLINE Instance Shutdown,STABLE

ora.node1.vip

1 ONLINE ONLINE node1 STABLE

ora.node2.vip

1 ONLINE INTERMEDIATE node1 FAILED OVER,STABLE

ora.qosmserver

1 ONLINE ONLINE node1 STABLE

ora.scan1.vip

1 ONLINE ONLINE node1 STABLE

  16. Reboot both nodes and verify
