Oracle RAC Node Rebuild

Method 1 (keep the software on node 2)

1. Evict the node

Log in as root and run the command below (on all nodes except the last one; it deconfigures and evicts the current node). If the perl command fails, see step 4.
# perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
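To confirm the deconfig completed, a quick check on the evicted node (a sketch; after a successful deconfig the stack should report as down):
# $GRID_HOME/bin/crsctl check crs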

2. Run the root script to add the node back

As root:
# $GRID_HOME/root.sh
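Once root.sh finishes, the node should appear in the cluster again; a quick check from any node (a sketch):
[grid@rac1 ~]$ olsnodes -s -t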

Method 2 (fully remove the software and the instance on node 2)

1. Delete the instance

Silent mode (run on node 1 to delete node 2's instance; the arguments are the node name, the global database name (if global_names is not set, it is the same as the service name), the instance name, and the SYS password):
[oracle@rac1 ~]$ dbca -silent -deleteinstance -nodelist rac2 -gdbname orcl -instancename orcl2 -sysdbausername sys -sysdbapassword oracle

GUI dbca:
dbca → Instance Management → Delete Instance → select the database and enter credentials → select the inactive instance → confirm the deletion

Verify the remaining instances:
select inst_id,instance_name from gv$instance;
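dbca -deleteinstance also drops the deleted instance's redo thread and undo tablespace; as a sanity check (a sketch), thread 2 should no longer be listed:
SQL> select thread#, status from v$thread;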

2. On the surviving node, update the cluster node list as the oracle user (node1 below stands for the node's hostname)

[oracle@rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node1}"
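
To confirm the update took effect, you can inspect the node list recorded in the central inventory (a sketch; the /oracle/oraInventory path matches the installer output later in this post, adjust to your environment):
[oracle@rac1 ~]$ grep -i node /oracle/oraInventory/ContentsXML/inventory.xml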

3. Remove node 2's VIP from the cluster

Stop the local listener resources:
[grid@node1 ~]$ crsctl stop res ora.LISTENER.lsnr
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node2' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node1' succeeded

Stop node 2's VIP (if it will not stop, try removing it directly):
[root@node1 ~]# /oracle/grid/crs_1/bin/srvctl stop vip -i node2-vip

Remove node 2's VIP:
[root@node1 ~]# /oracle/grid/crs_1/bin/srvctl remove vip -i node2-vip -f

Check the status:
crsctl status res -t

Node 2's VIP entries are now gone.
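As a further check (a sketch), srvctl should now report that no VIP configuration exists for node 2:
[grid@node1 ~]$ srvctl config vip -n node2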

4. Check the cluster node information
[grid@node1 ~]$ olsnodes -s -t
node1 Active Unpinned
node2 Active Unpinned
(If node 2 shows as Pinned, unpin it first: [grid@rac1 ~]$ crsctl unpin css -n rac2)

5. Delete node 2
Shut down the clusterware stack on node 2:
[root@node2 ~]# /oracle/grid/crs_1/bin/crsctl stop crs

If the stack will not shut down because resources are stuck in an UNKNOWN state, you can simply disable CRS and reboot the server. As root:
/u01/app/11.2.0/grid/bin/crsctl disable crs
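
Before deleting the node from node 1, confirm the stack really is down on node 2, otherwise you will hit the CRS-4658 error shown below (a quick check):
[root@node2 ~]# /oracle/grid/crs_1/bin/crsctl check crs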

Delete node 2 from node 1:
[root@rac1 bin]# $GRID_HOME/bin/crsctl delete node -n rac2
CRS-4661: Node rac2 successfully deleted.

If you get the error below, node 2's stack was not fully shut down:
CRS-4658: The clusterware stack on node node2 is not completely down.
CRS-4000: Command Delete failed, or completed with errors.

6. Update the GI inventory
su - grid
cd $ORACLE_HOME/oui/bin
[grid@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1}" CRS=TRUE -local

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'UpdateNodeList' was successful.

7. Re-add node 2 to the cluster
7.1 Prerequisites
(1) Create the same OS users and groups, with user and group IDs identical to the existing nodes.
(2) Configure the hosts file; the new node's entries must match the existing nodes'.
(3) Configure the system parameters and user limits to match the existing nodes, and configure the network.
(4) Create the required directories with matching ownership (adapt the paths to your environment; this is very important):
mkdir /oracle/app
mkdir /oracle/grid/crs_1
mkdir /oracle/gridbase
mkdir /oracle/oraInventory
chown oracle:oinstall /oracle/app
chown grid:oinstall /oracle/grid/
chown grid:oinstall /oracle/grid/crs_1
chown grid:oinstall /oracle/gridbase
chown grid:oinstall /oracle/oraInventory
chmod 770 /oracle/oraInventory
(5) Check multipathing and disk permissions (see the sketch below).
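
A minimal sketch of that check (device names and the multipath layout are environment-specific; compare owner, group, and permissions against the same devices on rac1):
[root@rac2 ~]# multipath -ll
[root@rac2 ~]# ls -l /dev/mapper/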

7.2 Configure SSH equivalence between the users and install the cluster rpm package
Go to the directory where the grid software was unzipped:
[root@rac1 sshsetup]# cd /oracle/grid/grid/sshsetup
SSH for the grid user:
./sshUserSetup.sh -user grid -hosts "rac1 rac2" -advanced -noPromptPassphrase
SSH for the oracle user:
./sshUserSetup.sh -user oracle -hosts "rac1 rac2" -advanced -noPromptPassphrase
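
After the setup, verify passwordless SSH in both directions for both users (a sketch):
[grid@rac1 ~]$ ssh rac2 date
[grid@rac2 ~]$ ssh rac1 date
(repeat the same test as the oracle user)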
Copy the rpm from the grid software directory to node 2:
[grid@rac1 ~]$scp cvuqdisk-1.0.9-1.rpm 192.168.40.102:/home/grid

Install the rpm on node 2 (as root):
[root@rac2 ~]# rpm -ivh cvuqdisk-1.0.9-1.rpm
If there is no oinstall group, the install may fail; in that case set the owning group manually first, e.g. export CVUQDISK_GRP=dba

7.3 Check whether rac2 meets the RAC installation requirements
1. Check network and storage:
[grid@racdb1 ~]$ cluvfy stage -post hwos -n rac2
Check: TCP connectivity of subnet "10.0.0.0"
Source               Destination     Connected?
-------------------  --------------  ----------
rac1:192.168.40.101  rac2:10.0.0.3   failed

ERROR:
PRVF-7617 : Node connectivity between "rac1 : 192.168.40.101" and "rac2 : 10.0.0.3" failed
Result: TCP connectivity check failed for subnet "10.0.0.0"

Result: Node connectivity check failed
If you see the error above, it can be ignored.

2. Check rpm packages, disk space, etc.:
[grid@racdb1 ~]$ cluvfy comp peer -n rac2

3. Overall pre-add check:
[grid@racdb1 ~]$ cluvfy stage -pre nodeadd -n rac2 -fixup -verbose

7.4 Add the node (clean up the target paths beforehand, otherwise files owned by root cannot be written over)
If this step errors out, re-run the GI inventory update from step 6 above.

If the add fails while copying the home, you can tar the ORACLE_HOME over from the good node and then run the node addition again.

As the grid user, run from $GRID_HOME/oui/bin.
Skip the self-checks performed during addNode (we do not use DNS or NTP, and if the addNode pre-checks fail, the node cannot be added).
Before running it, delete small log files, especially audit and trace logs; otherwise the copy step is very slow.

Add the node, ignoring some prerequisite checks (run on node 1):
$ cd $ORACLE_HOME/oui/bin
$ export IGNORE_PREADDNODE_CHECKS=Y
$ ./addNode.sh "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"

Log:
Each script in the list below is followed by a list of nodes.
/oracle/oraInventory/orainstRoot.sh #On nodes rac2
/oracle/grid/crs_1/root.sh #On nodes rac2
To execute the configuration scripts:
1. Open a terminal window
2. Log in as “root”
3. Run the scripts in each cluster node

The Cluster Node Addition of /oracle/grid/crs_1 was unsuccessful.
Please check '/tmp/silentInstall.log' for more details.

Run the scripts from the prompt above:
(1)[root@rac2 oracle]# /oracle/oraInventory/orainstRoot.sh

(2)[root@rac2 oracle]# /oracle/grid/crs_1/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/grid/crs_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/grid/crs_1/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'...
Operation successful.
PRKO-2190 : VIP exists for node rac2, VIP name rac2-vip
PRKO-2420 : VIP is already started on node(s): rac2
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
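
At this point the clusterware stack on node 2 should be up; a quick check from either node (a sketch):
[grid@rac1 ~]$ crsctl check cluster -all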

7.5 Add the database home to the new node (run on node 1)
As the oracle user:
cd $ORACLE_HOME/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}"

Run the root script on node 2 (use the exact script paths from the feedback above; run it on node 2 only):
[root@rac2 oracle]# /oracle/app/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracle/app/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

7.6 Create the instance on node 2 (run on node 1)
For a Data Guard database, db_unique_name must temporarily be changed to match db_name, otherwise this step raises an error; change db_unique_name back once the instance has been added (a sketch follows).
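A minimal sketch of that temporary change, assuming the current db_unique_name is orcldg (a hypothetical value); db_unique_name is a static parameter, so the change needs scope=spfile and a restart:
SQL> alter system set db_unique_name='orcl' scope=spfile sid='*';
SQL> -- restart the database, run the dbca command below, then change it back:
SQL> -- alter system set db_unique_name='orcldg' scope=spfile sid='*';
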
[oracle@rac1 bin]$ dbca -silent -addInstance -nodeList rac2 -gdbName orcl -instanceName orcl2 -sysDBAUserName sys -sysDBAPassword "oracle"

Adding instance
1% complete
2% complete
6% complete
13% complete
20% complete
27% complete
28% complete
34% complete
41% complete
48% complete
54% complete
66% complete
Completing instance management.
76% complete
100% complete
Look at the log file "/oracle/app/cfgtoollogs/dbca/orcl/orcl.log" for further details.

7.7 Verify the cluster status
[grid@rac2 ~]$ crs_stat -t
Name           Type           Target   State    Host
-----------------------------------------------------
ora.DATA.dg    ora...up.type  ONLINE   ONLINE   rac1
ora...ER.lsnr  ora...er.type  ONLINE   ONLINE   rac1
ora...N1.lsnr  ora...er.type  ONLINE   ONLINE   rac1
ora.OCRVOT.dg  ora...up.type  ONLINE   ONLINE   rac1
ora.asm        ora.asm.type   ONLINE   ONLINE   rac1
ora.cvu        ora.cvu.type   ONLINE   ONLINE   rac1
ora.gsd        ora.gsd.type   OFFLINE  OFFLINE
ora...network  ora...rk.type  ONLINE   ONLINE   rac1
ora.oc4j       ora.oc4j.type  ONLINE   ONLINE   rac1
ora.ons        ora.ons.type   ONLINE   ONLINE   rac1
ora.orcl.db    ora...se.type  ONLINE   ONLINE   rac1
ora...SM1.asm  application    ONLINE   ONLINE   rac1
ora...C1.lsnr  application    ONLINE   ONLINE   rac1
ora.rac1.gsd   application    OFFLINE  OFFLINE
ora.rac1.ons   application    ONLINE   ONLINE   rac1
ora.rac1.vip   ora...t1.type  ONLINE   ONLINE   rac1
ora...SM2.asm  application    ONLINE   ONLINE   rac2
ora...C2.lsnr  application    ONLINE   ONLINE   rac2
ora.rac2.gsd   application    OFFLINE  OFFLINE
ora.rac2.ons   application    ONLINE   ONLINE   rac2
ora.rac2.vip   ora...t1.type  ONLINE   ONLINE   rac2
ora...ry.acfs  ora...fs.type  ONLINE   ONLINE   rac1
ora.scan1.vip  ora...ip.type  ONLINE   ONLINE   rac1

[grid@rac2 ~]$ srvctl status db -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2

SQL> select inst_id,status from gv$instance;

INST_ID STATUS
---------- ------------
3 OPEN
1 OPEN

SQL> select open_mode from v$database;

OPEN_MODE
------------------
READ WRITE
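
As a final sanity check, cluvfy can validate the node addition as a whole (a sketch):
[grid@rac1 ~]$ cluvfy stage -post nodeadd -n rac2 -verbose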
