Reinstalling one Oracle node / reinstalling a crashed Oracle RAC node

One node of an Oracle RAC on Linux has crashed while the other node is still running fine. How do I reinstall the crashed node? Advice from the experts would be much appreciated!

I have already reinstalled the OS on the crashed node, but how do I bring it back into the RAC? Is the method to delete node1's information from the cluster via node2 and then add the node again, or can the node be recovered directly without deleting its information first?

What are the exact steps? Any guidance would be greatly appreciated!

node1 is dead

node2 is fine

After reinstalling node1 the hostname is still node1. Can it be added back directly, without deleting the old node information first?

Can anyone spell out the exact steps?

First clean node1's information out of the cluster from node2, then add node1 back.

The steps are as follows:

1. On node2, run DBCA and delete the node1 instance (a silent-mode sketch follows);
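If the DBCA GUI is awkward to reach, the same delete can usually be done in silent mode. This is only a sketch: the database name racdb, the instance name racdb1 and the SYS password are placeholders you must replace with your own.

$ dbca -silent -deleteInstance -nodeList node1 -gdbName racdb -instanceName racdb1 -sysDBAUserName sys -sysDBAPassword <sys_password>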

2. If ASM is in use, remove the ASM instance: srvctl remove asm -n node1;

3. On node2, run the installer's updateNodeList step:

runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=node2"

4. On node2, run the rootdeletenode.sh script:

$CRS_HOME/install/rootdeletenode.sh node1

5. On node2, run updateNodeList again to update the CRS inventory:

runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES=node2"

6. Check whether the deletion succeeded:

cluvfy comp crs -n all

Then add the node back:

1 Install CRS

2 Add ONS (see the racgons example after this outline)

3 Install ASM

4 Config Listener

5 Install DB software

6 Add DB instance into this node
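The "Add ONS" step in the outline above is normally just registering the new node's ONS remote port with the cluster. A hedged sketch, assuming node1 uses remote port 6200 (check $CRS_HOME/opmn/conf/ons.config for the real value):

# $CRS_HOME/bin/racgons add_config node1:6200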

On the freshly installed node1, configure everything exactly the same as on node2, including the environment.

1. On node2, as the oracle user, go to $CRS_HOME/oui/bin and run the addNode.sh script.

Follow the prompts to add the node (a silent-mode sketch follows).
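If you would rather not click through the OUI screens, addNode.sh can also take the new node names on the command line. This is only a sketch; the private and VIP host names node1-priv and node1-vip are assumptions and must match what is in your /etc/hosts:

$ cd $CRS_HOME/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node1}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={node1-priv}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node1-vip}"

Afterwards run the root scripts OUI prints at the end as root (in my environment that was rootaddnode.sh on node2 and root.sh on node1).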

2. On node2, as the oracle user, go to $ORACLE_HOME/oui/bin and run the addNode.sh script.

Follow the prompts to add the node (a silent-mode sketch follows).
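The silent equivalent for the database home is simpler; again just a sketch:

$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node1}"

Run root.sh on node1 when OUI asks for it.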

3. Configure the listener.

On node1 run netca, choose cluster database, and follow the steps.

4. On node2, run DBCA to add the new instance.

First choose ... Cluster database ..., then Instance Management, then Add an instance, then ...

Nothing after that is difficult, so I won't spell it out; just follow the prompts (a silent-mode sketch follows).
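As with the delete, DBCA can also add the instance silently; a sketch, again assuming the placeholder database racdb and new instance racdb1:

$ dbca -silent -addInstance -nodeList node1 -gdbName racdb -instanceName racdb1 -sysDBAUserName sys -sysDBAPassword <sys_password>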

One more thing: adapt some of the commands above to your own environment. I am only describing the method and cannot guarantee it matches your setup exactly; try a few times and think it through, and you will get there. The overall idea is: delete the instance, then ASM, then the DB, then the CRS information; adding them back is the exact reverse.

This document is intended to provide the steps to be taken to remove a node from the Oracle cluster. The node itself is unavailable due to some OS issue or hardware issue which prevents the node from starting up. This document will provide the steps to remove such a node so that it can be added back after the node is fixed.

The steps to remove a node from a cluster are already documented in the Oracle documentation at

Version  Documentation Link
10gR2    http://download.oracle.com/docs/... elunix.htm#BEIFDCAF
11g      http://download.oracle.com/docs/... erware.htm#BEIFDCAF

This note is different because the documentation covers the scenario where the node is accessible and the removal is a planned procedure. This note covers the scenario where the node is unable to boot up and therefore it is not possible to run the clusterware commands from this node.

Solution

Summary

Basically all the steps documented in the Oracle® Clusterware Administration and Deployment Guide must be followed. The difference here is that we skip the steps that are to be executed on the node which is not available, and we run some extra commands on the other node, which is going to remain in the cluster, to remove the resources from the node that is to be removed.

Example Configuration

Node Names: halinux1, halinux2
OS: RHAS 4.0 Update 4 (both nodes)
Oracle Clusterware: Oracle 11g (both nodes)

Assume that halinux2 is down due to a hardware failure and cannot even boot up. The plan is to remove it from the clusterware, fix the issue and then add it again to the clusterware. In this document, we will cover the steps to remove the node from the clusterware.

Initial Stage

At this state, the Oracle Clusterware onHalinux1 (Good Node) is up and running. The node Halinux2 is down and cannot

be accessed. Note that the Virtual IP ofhalinux2 is running on Node 1. The rest of halinux2 resources are OFFLINE

Step 1 - Remove oifcfg information for the failed node

Generally most installations use the global flag of the oifcfg command and can therefore skip this step. This can be confirmed using

$ oifcfg getif

If the output of the above command returns global, then you can skip this step (executing the delete command below against a global definition will only return an error).

If the output of the oifcfg getif command does not return global, then use the following command:

$ oifcfg delif -node <failed_node_name>

Step 2 Remove ONS information

Execute the following command as root to find out the remote port number to be used

$ cat $CRS_HOME/opmn/conf/ons.config

and remove the information pertaining to the node to be deleted using

# $CRS_HOME/bin/racgons remove_config halinux2:6200

Step 3 Remove resources

In this step, the resources that were defined on this node have to be removed. These resources include (a) the database, (b) the instance and (c) ASM. A list of these can be acquired by running the crs_stat -t command from any node; an example of removing them with srvctl is shown below.
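For example, still assuming the cluster above and a database named racdb with instance racdb2 on halinux2 (these names are placeholders, not taken from the note), the resources could be removed from halinux1 with:

$ srvctl remove instance -d racdb -i racdb2
$ srvctl remove asm -n halinux2
$ srvctl stop nodeapps -n halinux2
# srvctl remove nodeapps -n halinux2

Removing the nodeapps (including the VIP) usually has to be done as root, which is why the prompt changes on the last line.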

Step 4 Execute rootdeletenode.sh

From the node that you are not deleting, execute as root the following command, which will help find out the node number of the node that you want to delete

# $CRS_HOME/bin/olsnodes -n
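In this example the output would look roughly like the following (node name followed by node number); treat it as illustrative only:

halinux1        1
halinux2        2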

This number can be passed to the rootdeletenode.sh command, which is to be executed as root from any node which is going to remain in the cluster.

# $CRS_HOME/install/rootdeletenode.sh halinux2,2

Step 5 Update the Inventory

From the node which is going to remain in the cluster, run the following command as the owner of the CRS_HOME. The argument passed to CLUSTER_NODES is a comma-separated list of the node names of the cluster which are going to remain in the cluster. This step also needs to be performed from the ASM home and the RAC home (see the sketch after the command below).

$CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES=halinux1" CRS=TRUE

## Optionally enclose the host names with {}
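For the ASM home and the RAC home the same update is run as the home owner, without the CRS=TRUE flag. A sketch, assuming the database home is $ORACLE_HOME (repeat with the ASM home path if ASM runs from a separate home):

$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=halinux1"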
