Oracle 19c RAC: rejoining a node to the cluster after node failure and OS reinstall

Basic information

IP              VIP             Private IP      Hostname  DB Name  Instance Name
192.168.56.100  192.168.56.102  192.168.100.10  hotdb1    hotdb    hotdb1
192.168.56.101  192.168.56.103  192.168.100.20  hotdb2    hotdb    hotdb2

OS version: CentOS Linux release 8.4.2105

DB version: 19.17.0.0.0

Virtualization software: Oracle VM VirtualBox

This article simulates removing node 2 entirely, reinstalling its operating system, and adding it back to the cluster.

  1. Current status

[grid@hotdb1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

ONLINE ONLINE hotdb1 STABLE

ONLINE ONLINE hotdb2 STABLE

ora.chad

ONLINE ONLINE hotdb1 STABLE

ONLINE ONLINE hotdb2 STABLE

ora.net1.network

ONLINE ONLINE hotdb1 STABLE

ONLINE ONLINE hotdb2 STABLE

ora.ons

ONLINE ONLINE hotdb1 STABLE

ONLINE ONLINE hotdb2 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.ARCH.dg(ora.asmgroup)

1 ONLINE ONLINE hotdb1 STABLE

2 ONLINE ONLINE hotdb2 STABLE

3 OFFLINE OFFLINE STABLE

ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)

1 ONLINE ONLINE hotdb1 STABLE

2 ONLINE ONLINE hotdb2 STABLE

3 OFFLINE OFFLINE STABLE

ora.DATA.dg(ora.asmgroup)

1 ONLINE ONLINE hotdb1 STABLE

2 ONLINE ONLINE hotdb2 STABLE

3 OFFLINE OFFLINE STABLE

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE hotdb1 STABLE

ora.OCR.dg(ora.asmgroup)

1 ONLINE ONLINE hotdb1 STABLE

2 ONLINE ONLINE hotdb2 STABLE

3 OFFLINE OFFLINE STABLE

ora.asm(ora.asmgroup)

1 ONLINE ONLINE hotdb1 Started,STABLE

2 ONLINE ONLINE hotdb2 Started,STABLE

3 OFFLINE OFFLINE STABLE

ora.asmnet1.asmnetwork(ora.asmgroup)

1 ONLINE ONLINE hotdb1 STABLE

2 ONLINE ONLINE hotdb2 STABLE

3 OFFLINE OFFLINE STABLE

ora.cvu

1 ONLINE ONLINE hotdb1 STABLE

ora.hotdb.db

1 ONLINE ONLINE hotdb1 Open,HOME=/app/oracl

e/product/19.0.0/dbh

ome_1,STABLE

2 ONLINE ONLINE hotdb2 Open,HOME=/app/oracl

e/product/19.0.0/dbh

ome_1,STABLE

ora.hotdb1.vip

1 ONLINE ONLINE hotdb1 STABLE

ora.hotdb2.vip

1 ONLINE ONLINE hotdb2 STABLE

ora.qosmserver

1 ONLINE ONLINE hotdb1 STABLE

ora.scan1.vip

1 ONLINE ONLINE hotdb1 STABLE

--------------------------------------------------------------------------------

All resources are currently in a normal state.

  2. Remove the hotdb2 host

  3. Status after removal

  4. Reinstall the operating system on node hotdb2 (omitted here)

  5. Operating system configuration

Create the users and installation directories and mount the shared disks. Since this is not the focus of this article, it is not discussed in detail; configure them by referring to node 1.

Note: the users, groups, and installation directories must be identical to node 1. A minimal setup sketch follows below.
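For illustration only, here is a minimal sketch of the user, group, and directory setup on the freshly installed hotdb2. The UIDs (grid=1008, oracle=1009) and group names come from the cluvfy output later in this article; the GIDs shown are placeholders and the real values, plus any additional secondary groups, must be copied from /etc/passwd and /etc/group on hotdb1:

# Run as root on hotdb2 -- replace the GIDs with the actual values from hotdb1
groupadd -g 54321 oinstall
groupadd -g 54325 asmadmin
groupadd -g 54326 asmdba
groupadd -g 54327 asmoper
useradd -u 1008 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1009 -g oinstall -G asmdba oracle    # add any further groups hotdb1 uses (e.g. dba)
# Installation directories, owned the same way as on hotdb1
mkdir -p /app/19.0.0/grid /app/oracle/product/19.0.0/dbhome_1 /app/oraInventory
chown -R grid:oinstall /app/19.0.0 /app/oraInventory
chown -R oracle:oinstall /app/oracle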

  6. Pre-node-addition verification: compare the configuration of the two nodes

Run as the grid user on node 1:

[root@hotdb1 rpm]# su - grid

[grid@hotdb1 ~]$ cluvfy comp peer -refnode hotdb1 -n hotdb2

Verifying User Equivalence ...FAILED (PRVG-2019, PRKC-1191)

User equivalence unavailable on all the specified nodes

Verification cannot proceed

Verification of peer compatibility was unsuccessful on all the specified nodes.

Failures were encountered during execution of CVU verification request "peer compatibility".

Verifying User Equivalence ...FAILED

hotdb2: PRVG-2019 : Check for equivalence of user "grid" from node "hotdb1" to

node "hotdb2" failed

PRKC-1191 : Remote command execution setup check for node hotdb2 using

shell /usr/bin/ssh failed.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:TEahLSptFHAASSe8QZUgjGXEMU8x2/j9LDhBxjGY7Vo.
Please contact your system administrator.
Add correct host key in /home/grid/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/grid/.ssh/known_hosts:1
RSA host key for hotdb2 has changed and you have requested strict checking.
Host key verification failed.

CVU operation performed: peer compatibility

Date: Feb 24, 2023 8:59:38 AM

CVU home: /app/19.0.0/grid/

User: grid

Cause: node 2 has been reinstalled, so the previous SSH user equivalence is no longer valid and must be recreated.
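Before recreating the equivalence, the stale hotdb2 host key can optionally be cleared from node 1's known_hosts files (a suggested extra step, not in the original procedure; sshUserSetup.sh may also take care of this itself):

[root@hotdb1 ~]# su - grid -c "ssh-keygen -R hotdb2"
[root@hotdb1 ~]# su - oracle -c "ssh-keygen -R hotdb2"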

  7. Create SSH user equivalence

Run on node 1:

su - grid

cd /app/19.0.0/grid/oui/prov/resources/scripts

[grid@hotdb1 scripts]$ ./sshUserSetup.sh -user grid -hosts "hotdb1 hotdb2" -advanced -confirm -noPromptPassphrase

su - oracle

[oracle@hotdb1 scripts]$ cd $ORACLE_HOME/oui/prov/resources/scripts

[oracle@hotdb1 scripts]$ ./sshUserSetup.sh -user oracle -hosts "hotdb1 hotdb2" -advanced -confirm -noPromptPassphrase
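After the script completes, passwordless SSH can be verified from node 1 for both users (a suggested sanity check, not in the original article); each command should print the remote date without prompting for a password:

[grid@hotdb1 ~]$ ssh hotdb2 date
[oracle@hotdb1 ~]$ ssh hotdb2 date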

  8. Re-run the node-addition pre-check

[grid@hotdb1 ~]$ cluvfy stage -pre nodeadd -n hotdb2

Verifying Physical Memory ...FAILED (PRVF-7530)

Verifying Available Physical Memory ...PASSED

Verifying Swap Size ...PASSED

Verifying Free Space: hotdb2:/usr,hotdb2:/var,hotdb2:/etc,hotdb2:/app/19.0.0/grid,hotdb2:/sbin,hotdb2:/tmp ...PASSED

Verifying Free Space: hotdb1:/usr,hotdb1:/var,hotdb1:/etc,hotdb1:/app/19.0.0/grid,hotdb1:/sbin,hotdb1:/tmp ...PASSED

Verifying User Existence: oracle ...

Verifying Users With Same UID: 1009 ...PASSED

Verifying User Existence: oracle ...PASSED

Verifying User Existence: grid ...

Verifying Users With Same UID: 1008 ...PASSED

Verifying User Existence: grid ...PASSED

Verifying User Existence: root ...

Verifying Users With Same UID: 0 ...PASSED

Verifying User Existence: root ...PASSED

Verifying Group Existence: asmadmin ...PASSED

Verifying Group Existence: asmoper ...PASSED

Verifying Group Existence: asmdba ...PASSED

Verifying Group Existence: oinstall ...PASSED

Verifying Group Membership: oinstall ...PASSED

Verifying Group Membership: asmdba ...PASSED

Verifying Group Membership: asmadmin ...PASSED

Verifying Group Membership: asmoper ...PASSED

  9. Add the node at the Grid Infrastructure software level

[grid@hotdb1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y

[grid@hotdb1 ~]$ cd $ORACLE_HOME/addnode

[grid@hotdb1 addnode]$ ./addnode.sh -silent -ignorePrereq "CLUSTER_NEW_NODES={hotdb2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={hotdb2-vip}"

[FATAL] [INS-40912] Virtual host name: hotdb2-vip is assigned to another system on the network.

CAUSE: One or more virtual host names appeared to be assigned to another system on the network.

ACTION: Ensure that the virtual host names assigned to each of the nodes in the cluster are not currently in use, and the IP addresses are registered to the domain name you want to use as the virtual host name.

Cause: the cluster configuration still contains node 2's old information. It must be removed before the node can be added again.

  10. Delete the hotdb2 instance (run on node 1)

[root@hotdb1 rpm]# su - oracle

[oracle@hotdb1 ~]$ dbca -silent -deleteInstance -nodeName hotdb2 -gdbName hotdb -instanceName hotdb2 -sysDBAUserName sys -sysDBAPassword "oracle"

[WARNING] [DBT-19203] The Database Configuration Assistant will delete the Oracle instance and its associated OFA directory structure. All information about this instance will be deleted.

Prepare for db operation

40% complete

Deleting instance

Unable to copy the file "hotdb2:/etc/oratab" to "/tmp/oratab.hotdb2".

48% complete

52% complete

56% complete

60% complete

64% complete

68% complete

72% complete

76% complete

80% complete

Completing instance management.

100% complete

Instance "hotdb2" deleted successfully from node "hotdb2".

Look at the log file "/app/oracle/cfgtoollogs/dbca/hotdb/hotdb.log" for further details.

A check confirms that the hotdb2 instance has been removed.
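One way to confirm this (a suggested check, not part of the original article) is to ask Clusterware which instances are still configured and running for the database:

[oracle@hotdb1 ~]$ srvctl config database -d hotdb
# For this admin-managed database, the "Database instances:" line should now list only hotdb1.
[oracle@hotdb1 ~]$ srvctl status database -d hotdb
# Expected: only "Instance hotdb1 is running on node hotdb1".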

  11. Remove hotdb2 from the Oracle software inventory (run on node 1)

[oracle@hotdb1 ~]$ cd $ORACLE_HOME/oui/bin

[oracle@hotdb1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={hotdb1}"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3313 MB Passed

The inventory pointer is located at /etc/oraInst.loc
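To confirm the node list update took effect (a suggested check, not in the original article), the central inventory can be inspected; the path /app/oraInventory matches the orainstRoot.sh location shown later in this article, and after the update the dbhome_1 entry should list only hotdb1:

[oracle@hotdb1 ~]$ grep -i node /app/oraInventory/ContentsXML/inventory.xml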

  12. Remove the hotdb2 VIP (run on node 1)

[root@hotdb1 rpm]# cd /app/19.0.0/grid/bin/

[root@hotdb1 bin]# ./srvctl stop vip -i hotdb2-vip

[root@hotdb1 bin]# ./srvctl remove vip -i hotdb2-vip

Please confirm that you intend to remove the VIPs hotdb2-vip (y/[n]) y

The ora.hotdb2.vip resource is now gone.
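This can be double-checked (a suggested verification, not in the original article) by querying the resource directly; CRS should report that it no longer exists:

[root@hotdb1 bin]# ./crsctl stat res ora.hotdb2.vip
# Expected: CRS-2613: Could not find resource 'ora.hotdb2.vip'.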

  13. Check the cluster node list

[root@hotdb1 bin]# ./olsnodes -s -t

hotdb1 Active Unpinned

[root@hotdb1 bin]# ./olsnodes -n -l

hotdb1 1

  14. Verify again before adding the node

[root@hotdb1 ~]# su - grid

[grid@hotdb1 ~]$ cluvfy comp peer -refnode hotdb1 -n hotdb2

[grid@hotdb1 ~]$ cluvfy stage -pre nodeadd -n hotdb2

  15. Add the node at the Grid Infrastructure software level (run on node 1)

[grid@hotdb1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y

[grid@hotdb1 ~]$ cd $ORACLE_HOME/addnode

[grid@hotdb1 addnode]$ ./addnode.sh -silent -ignorePrereq "CLUSTER_NEW_NODES={hotdb2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={hotdb2-vip}"

The installer then prompts to run the following scripts on node 2:

[root@hotdb2 ~]# /app/oraInventory/orainstRoot.sh

Changing permissions of /app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /app/oraInventory to oinstall.

The execution of the script is complete.

[root@hotdb2 ~]# /app/19.0.0/grid/root.sh

Check /app/19.0.0/grid/install/root_hotdb2_2023-03-16_20-34-15-334877092.log for the output of root script

Checking the log shows the following error.

The key error in the log is:

oracle.ops.mgmt.cluster.ClusterException: protocol error: filename does not match request

According to MOS note 2710279.1, this is caused by a newer OpenSSH version, which makes scp fail during the remote file copy.

Run the following on hotdb2:

mv /usr/bin/scp /usr/bin/scp.orig

vim /usr/bin/scp    # create a new /usr/bin/scp whose entire content is the single line below:

/usr/bin/scp.orig -T $*

chmod 555 /usr/bin/scp

Re-run /app/19.0.0/grid/root.sh on hotdb2.

Checking the cluster status shows that it is back to normal.

  16. Add the node at the database software level

[root@hotdb1 bin]# su - oracle

[oracle@hotdb1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y

[oracle@hotdb1 ~]$ cd $ORACLE_HOME/addnode

[oracle@hotdb1 addnode]$ ./addnode.sh -silent -ignorePrereq "CLUSTER_NEW_NODES={hotdb2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={hotdb2-vip}"

[WARNING] [INS-06009] SSH performance is detected to be slow, which may impact performance during remote node operations like copying the software and executing prerequisite checks.

ACTION: Consider optimizing the ssh configuration.

Prepare Configuration in progress.

Prepare Configuration successful.

.................................................. 7% Done.

Copy Files to Remote Nodes in progress.

.................................................. 12% Done.

.................................................. 18% Done.

..............................

Copy Files to Remote Nodes successful.

You can find the log of this install session at:

/app/oraInventory/logs/addNodeActions2023-03-19_08-39-15PM.log

Instantiate files in progress.

Instantiate files successful.

.................................................. 52% Done.

Saving cluster inventory in progress.

.................................................. 89% Done.

Saving cluster inventory successful.

The Cluster Node Addition of /app/oracle/product/19.0.0/dbhome_1 was successful.

Please check '/app/oraInventory/logs/silentInstall2023-03-19_08-39-15PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.

.................................................. 96% Done.

As a root user, execute the following script(s):

1. /app/oracle/product/19.0.0/dbhome_1/root.sh

Execute /app/oracle/product/19.0.0/dbhome_1/root.sh on the following nodes:

[hotdb2]

Successfully Setup Software.

.................................................. 100% Done.

Run on node 2:

[root@hotdb2 bin]# /app/oracle/product/19.0.0/dbhome_1/root.sh

Check /app/oracle/product/19.0.0/dbhome_1/install/root_hotdb2_2023-03-19_21-03-40-509813024.log for the output of root script

[root@hotdb2 bin]# tail -50f /app/oracle/product/19.0.0/dbhome_1/install/root_hotdb2_2023-03-19_21-03-40-509813024.log

Performing root user operation.

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /app/oracle/product/19.0.0/dbhome_1

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

  17. Add the database instance (run on node hotdb1)

[oracle@hotdb1 addnode]$ dbca -silent -addInstance -nodeName hotdb2 -gdbName hotdb -instanceName hotdb2 -sysDBAUserName sys -sysDBAPassword "oracle"

Prepare for db operation

40% complete

Adding instance

48% complete

52% complete

56% complete

60% complete

64% complete

68% complete

72% complete

80% complete

Completing instance management.

86% complete

100% complete

Instance "hotdb2" added successfully on node "hotdb2".

Look at the log file "/app/oracle/cfgtoollogs/dbca/hotdb/hotdb0.log" for further details.
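Before pulling the full resource listing, a quick check (a suggested verification, not from the original article) confirms that both instances are now running:

[oracle@hotdb1 ~]$ srvctl status database -d hotdb
# Expected: Instance hotdb1 is running on node hotdb1
#           Instance hotdb2 is running on node hotdb2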

  18. Check the cluster status

[grid@hotdb1 ~]$ crsctl status res -t

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

ONLINE ONLINE hotdb1 STABLE

ONLINE ONLINE hotdb2 STABLE

ora.chad

ONLINE ONLINE hotdb1 STABLE

ONLINE ONLINE hotdb2 STABLE

ora.net1.network

ONLINE ONLINE hotdb1 STABLE

ONLINE ONLINE hotdb2 STABLE

ora.ons

ONLINE ONLINE hotdb1 STABLE

ONLINE ONLINE hotdb2 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.ARCH.dg(ora.asmgroup)

1 ONLINE ONLINE hotdb1 STABLE

2 ONLINE ONLINE hotdb2 STABLE

3 OFFLINE OFFLINE STABLE

ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)

1 ONLINE ONLINE hotdb1 STABLE

2 ONLINE ONLINE hotdb2 STABLE

3 ONLINE OFFLINE STABLE

ora.DATA.dg(ora.asmgroup)

1 ONLINE ONLINE hotdb1 STABLE

2 ONLINE ONLINE hotdb2 STABLE

3 OFFLINE OFFLINE STABLE

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE hotdb1 STABLE

ora.OCR.dg(ora.asmgroup)

1 ONLINE ONLINE hotdb1 STABLE

2 ONLINE ONLINE hotdb2 STABLE

3 OFFLINE OFFLINE STABLE

ora.asm(ora.asmgroup)

1 ONLINE ONLINE hotdb1 Started,STABLE

2 ONLINE ONLINE hotdb2 Started,STABLE

3 OFFLINE OFFLINE STABLE

ora.asmnet1.asmnetwork(ora.asmgroup)

1 ONLINE ONLINE hotdb1 STABLE

2 ONLINE ONLINE hotdb2 STABLE

3 OFFLINE OFFLINE STABLE

ora.cvu

1 ONLINE ONLINE hotdb1 STABLE

ora.hotdb.db

1 ONLINE ONLINE hotdb1 Open,HOME=/app/oracl

e/product/19.0.0/dbh

ome_1,STABLE

2 ONLINE ONLINE hotdb2 Open,HOME=/app/oracl

e/product/19.0.0/dbh

ome_1,STABLE

ora.hotdb1.vip

1 ONLINE ONLINE hotdb1 STABLE

ora.hotdb2.vip

1 ONLINE ONLINE hotdb2 STABLE

ora.qosmserver

1 ONLINE ONLINE hotdb1 STABLE

ora.scan1.vip

1 ONLINE ONLINE hotdb1 STABLE

--------------------------------------------------------------------------------
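As a final cleanup (a suggestion, not part of the original procedure), the scp workaround applied on hotdb2 in step 15 can be reverted now that all node-addition operations have finished:

[root@hotdb2 ~]# rm -f /usr/bin/scp             # remove the one-line wrapper
[root@hotdb2 ~]# mv /usr/bin/scp.orig /usr/bin/scp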

  19. At this point the reconfiguration of node 2 is complete.
