Oracle Cluster: Deleting a Node and Adding It Back with Clone

Currently working at Haitian Qidian, serving the power industry, helping customers solve problems in production and improve efficiency. Hobbies: calligraphy and the I Ching. Happy to meet like-minded people and improve together! WeChat: sunyunyi_sun

Purpose:

This article demonstrates the process of deleting a node from a cluster and adding a node back in an 11.2.0.4 environment.

My database is policy-managed; the cluster's two nodes, pmsup1 and pmsup2, are assigned to the upgrade_pool server pool. I first delete node pmsup1, then use clone to add pmsup1 back into the cluster.

Steps:

First, delete node pmsup1:

Deleting Oracle RAC from a cluster node

1: Delete the instance from the Oracle RAC database

a: Policy-managed databases

emca -deleteNode db

srvctl stop instance -d pmssn -i pmssn_1

srvctl modify serverpool -g upgrade_pool -l 1 -u 2 -i 2 -n "pmsup1,pmsup2" -f

srvctl relocate server -n pmsup1 -g Free

b: Administrator-managed databases

dbca -silent -deleteInstance -nodeList pmsup1 -gdbName pmssn -instanceName pmssn_1 -sysDBAUserName sysdba -sysDBAPassword "Oraclesys"

2: Remove the Oracle RAC software

srvctl disable listener -l LISTENER -n pmsup1

srvctl stop listener -l LISTENER -n pmsup1

On the node being deleted:

$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1 "CLUSTER_NODES={pmsup1}" -local

$ORACLE_HOME/deinstall/deinstall -local

On the remaining node:

$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1 "CLUSTER_NODES={pmsup2}"

3: Deleting a cluster node

export LANG=C

olsnodes -s -t

If the node is pinned, unpin it first: crsctl unpin css -n node_to_be_deleted
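The pinned check above can be scripted; a minimal sketch, fed with sample `olsnodes -s -t` output (on a live cluster, pipe the command itself):

```shell
# Sample output of `olsnodes -s -t`; node names are the ones in this article.
olsnodes_output="pmsup1 Active Pinned
pmsup2 Active Unpinned"

# Emit an unpin command for every pinned node (run the output as root).
unpin_cmds=$(printf '%s\n' "$olsnodes_output" | awk '$3 == "Pinned" { print "crsctl unpin css -n " $1 }')
printf '%s\n' "$unpin_cmds"
```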

Stop EM (Enterprise Manager) on the node.

$/u01/app/11.2.0.4/grid/crs/install/rootcrs.pl -deconfig -force

$/u01/app/11.2.0.4/grid/deinstall/deinstall -local

Note: the deinstall -local command actually prompted to deconfigure node 2 as well, so take special care when running it.

The official Oracle steps above are not very reliable.

It is better to use the following script:

rm -rf /etc/oracle/scls_scr

rm -rf /etc/oracle/

rm -f /etc/init.d/init.cssd

rm -f /etc/init.d/init.crs

rm -f /etc/init.d/init.crsd

rm -f /etc/init.d/init.evmd

rm -f /etc/rc2.d/K96init.crs

rm -f /etc/rc2.d/S96init.crs

rm -f /etc/rc3.d/K96init.crs

rm -f /etc/rc3.d/S96init.crs

rm -f /etc/rc5.d/K96init.crs

rm -f /etc/rc5.d/S96init.crs

rm -f /etc/inittab.crs

rm -f /etc/ohasd

rm -f /etc/oraInst.loc

rm -f /etc/oratab

rm -rf /tmp/.oracle

rm -rf /tmp/ora*

rm -rf /var/tmp/.oracle

rm -rf /tmp/CVU*

rm -rf /tmp/Ora*

rm -rf /home/grid/.oracle

rm -rf /u01/*

mv /etc/init.d/init.ohasd /etc/init.d/init.ohasd.bak

If you also need to wipe the disk (I do not need to here):

dd if=/dev/zero of=/dev/mapper/mpath1p1 bs=1M count=256

Next, add node pmsup1 back:

First add the RAC software, i.e. the database home:

Make sure the OS parameters are set correctly, the grid and oracle users' environment variables are correct, and the directories are created properly.
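A tiny sanity sketch for the environment-variable part (the demo values are this article's paths; replace with your own):

```shell
# Demo values taken from this walkthrough; replace with your own paths.
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1

# Collect any variable that ended up empty; report the result.
missing=""
for v in ORACLE_BASE ORACLE_HOME; do
  eval "val=\${$v:-}"
  [ -z "$val" ] && missing="$missing $v"
done
echo "missing:${missing:- none}"
```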

oracle user:

$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={pmsup1}"

Read the output carefully, then run the following commands:

/u01/app/oraInventory/orainstRoot.sh

/u01/app/oracle/product/11.2.0.4/db_1/root.sh

Add the grid home:

$cluvfy stage -pre nodeadd -n pmsup1 -fixup -fixupdir /tmp -verbose

$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={pmsup1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={pmsup1-vip}"

Here it complained that pmsup1 was already a member of the cluster, yet the software directory had not been copied successfully; the message was:

SEVERE:The new nodes 'pmsup1' are already part of the cluster

Use clone instead:

On node pmsup2:

srvctl stop database -d pmssn

crsctl stop has -f

as root user:

cp -prf /u01/app/11.2.0.4/grid /u02

delete unnecessary files:

cd /u02/grid

rm -rf log/pmsup2

rm -rf gpnp/pmsup2

find gpnp -type f -exec rm -f {} \;

find cfgtoollogs -type f -exec rm -f {} \;

rm -rf crs/init/*

rm -rf cdata/*

rm -rf crf/*

rm -rf network/admin/*.ora

find . -name '*.ouibak' -exec rm {} \;

find . -name '*.ouibak.l' -exec rm {} \;

rm -rf root.sh*

rm -rf bin/clsecho/*

rm -rf rdbms/audit/*

rm -rf rdbms/log/*

rm -rf inventory/backup/*

cd /u02/grid

tar -zcvpf /u02/gridHome.tgz .

On node pmsup1:

scp root@pmsup2:/u02/gridHome.tgz /u01/app/11.2.0.4

Because I am adding the node back on top of an existing setup, the directories below already exist; in a fresh environment they need to be created.

mkdir -p /u01/app/11.2.0.4/grid

cd /u01/app/11.2.0.4/grid

tar -zxvf /u01/app/11.2.0.4/gridHome.tgz

mkdir -p /u01/app/oraInventory

chown oracle:oinstall /u01/app/oraInventory

chown -R oracle:oinstall /u01/app/11.2.0.4/grid

chmod u+s /u01/app/11.2.0.4/grid/bin/oracle

chmod g+s /u01/app/11.2.0.4/grid/bin/oracle

chmod u+s /u01/app/11.2.0.4/grid/bin/extjob

chmod u+s /u01/app/11.2.0.4/grid/bin/jssu

chmod u+s /u01/app/11.2.0.4/grid/bin/oradism
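The setuid bits restored above can be verified with `find`. The sketch below demonstrates the check on a scratch directory standing in for the grid home's `bin`:

```shell
# Scratch directory standing in for /u01/app/11.2.0.4/grid/bin.
dir=$(mktemp -d)
touch "$dir/oracle" "$dir/extjob"
chmod u+s "$dir/oracle"   # only 'oracle' gets the setuid bit in this demo

# List files with the setuid bit set; on the real home, point this at $GRID_HOME/bin.
setuid_files=$(find "$dir" -maxdepth 1 -perm -4000 -type f)
printf '%s\n' "$setuid_files"
rm -rf "$dir"
```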

Run clone.pl:

$ cd /u01/app/11.2.0.4/grid/clone/bin

$ perl clone.pl -silent ORACLE_BASE=/u01/app/grid ORACLE_HOME=/u01/app/11.2.0.4/grid ORACLE_HOME_NAME=OraHome1Grid INVENTORY_LOCATION=/u01/app/oraInventory -O'"CLUSTER_NODES={pmsup1,pmsup2}"' -O'"LOCAL_NODE=pmsup1"' CRS=TRUE -debug

Note the -O'"..."' quoting here: it is not needed on Windows, and 12c no longer requires it either. It is not very user-friendly and looks messy.
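The quoting behavior can be seen with a plain printf, no installer involved: the shell strips the outer single quotes, so clone.pl receives the argument with the inner double quotes intact.

```shell
# Same quoting as the clone.pl invocation above; printf shows what the
# program actually receives after the shell removes the single quotes.
arg=-O'"CLUSTER_NODES={pmsup1,pmsup2}"'
printf '%s\n' "$arg"
```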

The first time I ran the command above, it did not prompt me to run the root.sh script. Adding -debug showed that the grid user had no permission to create root.sh, so I ran:

chown -R grid:oinstall /u01

Running it again produced the following output with no errors:

-------------------------------------------------------

Initializing Java Virtual Machine from /tmp/OraInstall2018-01-11_09-40-54AM/jre/bin/java. Please wait...

Oracle Universal Installer, Version 11.2.0.4.0 Production

Copyright (C) 1999, 2013, Oracle. All rights reserved.

You can find the log of this install session at:

/u01/app/oraInventory/logs/cloneActions2018-01-11_09-40-54AM.log

...........................................................................................[main]

[ 2018-01-11 09:41:00.281 CST ] [QueryCluster.:56]  No Cluster detected

[main] [ 2018-01-11 09:41:00.282 CST ] [QueryCluster.isCluster:65]  Cluster existence check = false

......... 100% Done.

Installation in progress (Thursday, January 11, 2018 9:41:01 AM CST)

.....................................................................                                                           69% Done.

Install successful

Linking in progress (Thursday, January 11, 2018 9:41:04 AM CST)

Link successful

Setup in progress (Thursday, January 11, 2018 9:41:31 AM CST)

................                                                100% Done.

Setup successful

End of install phases.(Thursday, January 11, 2018 9:41:53 AM CST)

WARNING:

The following configuration scripts need to be executed as the "root" user in each new cluster node.

Each script in the list below is followed by a list of nodes.

/u01/app/11.2.0.4/grid/root.sh #On nodes pmsup1

To execute the configuration scripts:

1. Open a terminal window

2. Log in as "root"

3. Run the scripts in each cluster node

Run the script on the local node first. After successful completion, you can run the script in parallel on all

the other nodes.

Run the script as prompted:

/u01/app/11.2.0.4/grid/root.sh

The root.sh log is as follows:

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME=  /u01/app/11.2.0.4/grid

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:

/u01/app/11.2.0.4/grid/perl/bin/perl -I/u01/app/11.2.0.4/grid/perl/lib -I/u01/app/11.2.0.4/grid/crs/install /u01/app/11.2.0.4/grid/crs/install/roothas.pl

To configure Grid Infrastructure for a Cluster execute the following command:

/u01/app/11.2.0.4/grid/crs/config/config.sh

This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

It prompts to run /u01/app/11.2.0.4/grid/crs/config/config.sh

Note:

By default the config script starts a GUI. My network could not reach port 6000 on my own machine, so I had to run config in silent mode. Silent mode requires the parameter file grid_install.rsp, which can be copied from the response directory of the grid installation media and then edited with the concrete values. The file must have mode 600 and be owned by grid:

chmod 600 /home/grid/grid_install.rsp
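For orientation, a minimal sketch of the kind of entries grid_install.rsp carries (values taken from this article; the parameter names should be checked against the response file shipped with your 11.2 media):

```
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0.4/grid
INVENTORY_LOCATION=/u01/app/oraInventory
# node:vip pairs; later in this article the list is cut down to the new node only
oracle.install.crs.config.clusterNodes=pmsup1:pmsup1-vip,pmsup2:pmsup2-vip
```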

To run config.sh silently:

As grid user:

cd /u01/app/11.2.0.4/grid/crs/config

./config.sh -silent -responseFile /home/grid/grid_install.rsp -ignoreSysPrereqs -ignorePrereq

Error:

[FATAL] [INS-40915] The installer has detected the presence of Oracle Clusterware on the following nodes: pmsup2.

CAUSE: Either Oracle Clusterware is running on the listed nodes, or previous installations of Oracle Clusterware are

not completely deinstalled.

ACTION: For each node listed, ensure that existing Oracle Clusterware is completely deinstalled. You can also choose

not to include these nodes in this installation.

./config.sh -silent -responseFile /home/grid/grid_install.rsp -ignoreSysPrereqs -ignorePrereq -local -debug

The error still said that node pmsup2 exists. From this we can also see that before the config script runs, all nodes in the cluster must be deconfigured; only then will it succeed. Here I changed the parameter to include only one node:

Change the following parameter to list only the new node:

oracle.install.crs.config.clusterNodes=pmsup1:pmsup1-vip

Run again:

./config.sh -silent -responseFile /home/grid/grid_install.rsp -ignoreSysPrereqs -ignorePrereq -local -debug

The output was:

As a root user, execute the following script(s):

1. /u01/app/11.2.0.4/grid/root.sh

Execute /u01/app/11.2.0.4/grid/root.sh on the following nodes:

[pmsup1]

Successfully Setup Software.

[WARNING] [INS-32091] Software installation was successful. But some configuration assistants failed, were cancelled

or skipped.

ACTION: Refer to the logs or contact Oracle Support Services.

It reported configuration failures; that does not matter here, we just run root.sh.

In fact, config.sh simply creates the parameter file /u01/app/11.2.0.4/grid/crs/install/crsconfig_params. I should note that I then edited crsconfig_params to add the other node's information back in, because I had not deconfigured node 2 at this point. If node 2 is deconfigured first and grid_install.rsp contains node 2's information, then crsconfig_params will contain it as well.

Run root.sh:

/u01/app/11.2.0.4/grid/root.sh

Error:

Failed to create disk group DATA; the returned messages were:

ORA-15018: diskgroup cannot be created

ORA-15031: disk specification '/dev/raw/raw1' matches no disks

ORA-15014: path '/dev/raw/raw1' is not in the discovery set

This error occurred because I had written the wrong disk discovery path, so ASM could not be created. After fixing the ASM parameter file and the crsconfig_params file and starting the ASM instance manually, it was OK.
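Concretely, the fix amounts to making the discovery string in both files match the real device path (a sketch; /dev/raw/* is the pattern implied by the errors above, and the crsconfig_params variable name should be verified in your own file):

```
# ASM parameter file:
asm_diskstring='/dev/raw/*'

# crsconfig_params (assumed variable name):
ASM_DISCOVERY_STRING=/dev/raw/*
```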

SQL> alter diskgroup data mount;

ERROR at line 1:

ORA-15032: not all alterations performed

ORA-15017: diskgroup "DATA" cannot be mounted

ORA-15003: diskgroup "DATA" already mounted in another lock name space

Because node 2 was up, shut down node 2's CRS -- crsctl stop has -f

SQL> alter diskgroup data mount;

Diskgroup altered.

Run root.sh again.

The log is as follows:

Installing Trace File Analyzer

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'pmsup1'

CRS-2676: Start of 'ora.cssdmonitor' on 'pmsup1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'pmsup1'

CRS-2672: Attempting to start 'ora.diskmon' on 'pmsup1'

CRS-2676: Start of 'ora.diskmon' on 'pmsup1' succeeded

CRS-2676: Start of 'ora.cssd' on 'pmsup1' succeeded

ASM created and started successfully.

Disk group DATA mounted successfully.

clscfg: -install mode specified

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.

-force is destructive and will destroy any previous cluster

configuration.

CRS-4256: Updating the profile

Successful addition of voting disk 9a74c1f6c5b14f56bfd68550557cc62b.

Successfully replaced voting disk group with +DATA.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

1. ONLINE   9a74c1f6c5b14f56bfd68550557cc62b (/dev/raw/raw1) [DATA]

Located 1 voting disk(s).

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Note the line "Voting file(s) successfully replaced": the voting disk was replaced, meaning its FUID changed. The FUID is the voting disk's unique identifier, so node 2 is now certain to fail to start; with the voting disk FUID changed, it can no longer find its voting files.

Starting node 2 logged the following error:

[cssd(20798)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds;

As said above, the config script requires all nodes to be deconfigured, so node 2 also has to be deconfigured and reconfigured:

On node 2:

As root user:

export LANG=C

/u01/app/11.2.0.4/grid/crs/install/rootcrs.pl -deconfig -force

Removing Trace File Analyzer

Successfully deconfigured Oracle clusterware stack on this node

/u01/app/11.2.0.4/grid/root.sh

Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params

User ignored Prerequisites during installation

Installing Trace File Analyzer

PRKO-2190 : VIP exists for node pmsup2, VIP name pmsup2-vip

Preparing packages for installation...

cvuqdisk-1.0.9-1

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

If root.sh reports here that ASM cannot be accessed, start the ASM instance manually and run root.sh again.

At this point the cluster clone is complete.

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.DATA.dg

ONLINE  ONLINE       pmsup1

ONLINE  ONLINE       pmsup2

ora.LISTENER.lsnr

ONLINE  ONLINE       pmsup1

ONLINE  ONLINE       pmsup2

ora.asm

ONLINE  ONLINE       pmsup1                   Started

ONLINE  ONLINE       pmsup2                   Started

ora.gsd

OFFLINE OFFLINE      pmsup1

OFFLINE OFFLINE      pmsup2

ora.net1.network

ONLINE  ONLINE       pmsup1

ONLINE  ONLINE       pmsup2

ora.ons

ONLINE  ONLINE       pmsup1

ONLINE  ONLINE       pmsup2

ora.registry.acfs

ONLINE  ONLINE       pmsup1

ONLINE  ONLINE       pmsup2

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1        ONLINE  ONLINE       pmsup1

ora.cvu

1        ONLINE  ONLINE       pmsup1

ora.oc4j

1        ONLINE  ONLINE       pmsup1

ora.pmssn.db

1        ONLINE  ONLINE       pmsup1                   Open

2        ONLINE  ONLINE       pmsup2                   Open

ora.pmsup1.vip

1        ONLINE  ONLINE       pmsup1

ora.pmsup2.vip

1        ONLINE  ONLINE       pmsup2

ora.scan1.vip

1        ONLINE  ONLINE       pmsup1

2018-1-11

孙显鹏
