Oracle 11g RAC: Changing the Public IP, VIP, and SCAN IP

Sometimes, because the network environment changes, the database's public IP, VIP, and SCAN IP need to be changed.

Environment: Red Hat 5.7 + Oracle 11.2.0.3 RAC + raw devices

ASM instance names: +ASM1, +ASM2

Database name: racdb

Database instance names: racdb1, racdb2


Original IP information:

# add by RAC install
# public  eth0
10.10.11.26    rac01
10.10.11.46    rac02

# vip    eth0:1
10.10.11.27    rac01-vip
10.10.11.47    rac02-vip

# private  eth1
192.168.1.5   rac01-priv
192.168.1.6   rac02-priv

# single client access name  (scan)
10.10.11.103   rac-scan


Change to the following corresponding IPs (the hostnames are not changed; changing a hostname would require reinstalling CRS):


# add by RAC install
# public  eth1
192.168.1.5    rac01
192.168.1.6    rac02

# vip    eth1:1
192.168.1.222    rac01-vip
192.168.1.223    rac02-vip

# private  eth0
10.10.11.26   rac01-priv
10.10.11.46   rac02-priv

# single client access name  (scan)
192.168.1.225   rac-scan


Change procedure:

Step 1: Back up the OCR and the voting disk
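
The original article does not show the backup commands, so here is a minimal sketch of one way to do it, assuming the same grid home used throughout this article. In 11.2 the voting disk contents are backed up automatically along with the OCR, so an OCR backup covers both.

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/ocrconfig -manualbackup              ------ take an on-demand physical OCR backup
[root@rac01 ~]# /u01/11.2.0.3/grid/bin/ocrconfig -showbackup                ------ list the automatic and manual OCR backups
[root@rac01 ~]# /u01/11.2.0.3/grid/bin/ocrconfig -export /tmp/ocr.bak       ------ logical export as an extra safety net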


Step 2: Stop all RAC-related applications (including the CRS daemons)

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/crsctl stop cluster -all

or, one node at a time:

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/crsctl stop cluster -n rac01
[root@rac01 ~]# /u01/11.2.0.3/grid/bin/crsctl stop cluster -n rac02

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/crsctl status res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.


If the stop command reports errors and the stack will not come down, use ps -ef | grep ora_ and ps -ef | grep ASM to find the processes and kill them directly.        --------PS: Ideally, the single command above stops both nodes cleanly.

Additionally:

There is a workaround on MOS: manually kill those CRS processes. Of course, in a production environment you should still apply the PSU.
ps -fea | grep ohasd.bin | grep -v grep
ps -fea | grep gipcd.bin | grep -v grep
ps -fea | grep mdnsd.bin | grep -v grep
ps -fea | grep gpnpd.bin | grep -v grep
ps -fea | grep crsd.bin | grep -v grep
ps -fea | grep evmd.bin | grep -v grep

kill -9  xxx  xxx  xxx  xxx
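
As a last resort, the greps above can be folded into one hypothetical one-liner (an assumption of this write-up, not part of the original procedure; it force-kills every daemon listed above, so use it only when a clean stop is impossible, and note that ohasd may be respawned by init):

for d in ohasd gipcd mdnsd gpnpd crsd evmd; do pkill -9 -f "${d}.bin"; done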



Step 3: Modify /etc/hosts and the IP settings of both NICs

[root@rac01 ~]# vi /etc/hosts

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6


# add by RAC install
# public  eth1
192.168.1.5    rac01
192.168.1.6    rac02

# vip    eth1:1
192.168.1.222    rac01-vip
192.168.1.223    rac02-vip

# private  eth0
10.10.11.26   rac01-priv
10.10.11.46   rac02-priv

# single client access name  (scan)
192.168.1.225   rac-scan

Both nodes must be configured identically here.
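
One convenience not shown in the original article: after editing the file on rac01, it can simply be copied to the second node so the two stay in sync, assuming root SSH equivalence between the nodes:

[root@rac01 ~]# scp /etc/hosts rac02:/etc/hosts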



[root@rac01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.5
NETMASK=255.255.255.0
GATEWAY=192.168.1.1

[root@rac01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=static
#HWADDR=00:50:56:9B:3B:76
ONBOOT=yes
#HOTPLUG=no
#DHCP_HOSTNAME=localhost.localdomain
IPADDR=10.10.11.26
NETMASK=255.255.255.0
GATEWAY=10.10.11.1


[root@rac02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)

DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.6
NETMASK=255.255.255.0
GATEWAY=192.168.1.1

[root@rac02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
#HWADDR=00:50:56:9b:78:5c
IPADDR=10.10.11.46
NETMASK=255.255.255.0
GATEWAY=10.10.11.1


PS: The public interface (eth1) here must have a gateway configured.
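
The article does not show it, but the edited ifcfg files only take effect once the network stack is reloaded. A minimal sketch for RHEL 5 (run on both nodes):

[root@rac01 ~]# service network restart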



Step 4: Start the CRS stack, then stop the resources that start along with it

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/crsctl start cluster -n rac01

[root@rac02 ~]# /u01/11.2.0.3/grid/bin/crsctl start cluster -n rac02
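
Before stopping the resources, it may be worth confirming that the stack actually came up on both nodes (a verification step not in the original article):

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/crsctl check cluster -all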


Stop all RAC resources (including the database, ASM, and nodeapps), but keep the CRS daemons running; that is, do not shut down the clusterware itself.

[grid@rac01 ~]$ /u01/11.2.0.3/grid/bin/crsctl stop resource -all

[root@rac02 ~]# /u01/11.2.0.3/grid/bin/crsctl stop resource -all         ------- the -all used here stops all resources on both nodes, which works fine here; I am not aware of a way to stop all of a single node's resources

(Alternatively, you can stop the database, ASM, and nodeapps on each node:

 srvctl stop database -d racdb      stop the database
 srvctl stop asm -n rac01           stop each node's ASM instance
 srvctl stop nodeapps -n rac01      stop each node's services, including gsd, ons, the VIP, and the listener
 )

PS: Start them in the reverse order, as sketched below.
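
Spelled out for node rac01, the start sequence would look like this (a sketch inferred from the stop commands above; it is not shown in the original article):

 srvctl start nodeapps -n rac01
 srvctl start asm -n rac01
 srvctl start database -d racdb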


[root@rac01 ~]# /u01/11.2.0.3/grid/bin/crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               OFFLINE OFFLINE      rac01                                       
               OFFLINE OFFLINE      rac02                                       
ora.DATADG.dg
               OFFLINE OFFLINE      rac01                                       
               OFFLINE OFFLINE      rac02                                       
ora.LISTENER.lsnr
               OFFLINE OFFLINE      rac01                                       
               OFFLINE OFFLINE      rac02                                       
ora.asm
               OFFLINE OFFLINE      rac01                    Instance Shutdown  
               OFFLINE OFFLINE      rac02                    Instance Shutdown  
ora.gsd
               OFFLINE OFFLINE      rac01                                       
               OFFLINE OFFLINE      rac02                                       
ora.net1.network
               OFFLINE OFFLINE      rac01                                       
               OFFLINE OFFLINE      rac02                                       
ora.ons
               OFFLINE OFFLINE      rac01                                       
               OFFLINE OFFLINE      rac02                                       
ora.registry.acfs
               OFFLINE OFFLINE      rac01                                       
               OFFLINE OFFLINE      rac02                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        OFFLINE OFFLINE                                                  
ora.cvu
      1        OFFLINE OFFLINE                                                  
ora.oc4j
      1        OFFLINE OFFLINE                                                  
ora.rac01.vip
      1        OFFLINE OFFLINE                                                  
ora.rac02.vip
      1        OFFLINE OFFLINE                                                  
ora.racdb.db
      1        OFFLINE OFFLINE                               Instance Shutdown  
      2        OFFLINE OFFLINE                               Instance Shutdown  
ora.scan1.vip
      1        OFFLINE OFFLINE




Step 5: Change the public and private interfaces:

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/oifcfg getif                                                        --------------------- show the current interface configuration
[root@rac01 ~]# /u01/11.2.0.3/grid/bin/oifcfg delif -global eth0                                           --------------------- remove the current public interface, eth0
[root@rac01 ~]# /u01/11.2.0.3/grid/bin/oifcfg setif -global eth1/192.168.1.0:public                        --------------------- register eth1 as the new public interface
[root@rac01 ~]# /u01/11.2.0.3/grid/bin/oifcfg setif -global eth0/10.10.11.0:cluster_interconnect           --------------------- register eth0 as the new cluster_interconnect
[root@rac01 ~]# /u01/11.2.0.3/grid/bin/oifcfg delif -global eth1                                           --------------------- remove eth1 (after eth0 was deleted, eth1 is registered as both public and private; this clears both)
[root@rac01 ~]# /u01/11.2.0.3/grid/bin/oifcfg setif -global eth1/192.168.1.0:public                        --------------------- re-register eth1 as the public interface
[root@rac01 ~]# /u01/11.2.0.3/grid/bin/oifcfg getif
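
If everything above succeeded, the final getif should report something like the following (illustrative output, based on the subnets configured in this article):

eth1  192.168.1.0  global  public
eth0  10.10.11.0  global  cluster_interconnect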


Step 6: Change the public VIPs:

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/srvctl modify nodeapps -n rac01 -A 192.168.1.222/255.255.255.0/eth1
[root@rac01 ~]# /u01/11.2.0.3/grid/bin/srvctl modify nodeapps -n rac02 -A 192.168.1.223/255.255.255.0/eth1

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/crsctl modify res ora.net1.network -attr USR_ORA_SUBNET=192.168.1.0     ------ change the network resource's subnet first, before touching the SCAN
[root@rac01 ~]# /u01/11.2.0.3/grid/bin/srvctl modify scan -n rac-scan                                          ------- then modifying the SCAN by name makes it re-resolve rac-scan, which picks up the new SCAN IP
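
To confirm the change took effect, the SCAN definition can be re-checked (a verification step not shown in the original article):

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/srvctl config scan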



Step 7: Start all RAC processes and resources:

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/crsctl start cluster -all

[root@rac01 ~]# /u01/11.2.0.3/grid/bin/crsctl start resource -all

It is advisable to run these two commands on both nodes.


Step 8: Update the local_listener parameter for the grid (ASM) and oracle (database) instances on both nodes

Log in to the instances as the grid and oracle users and check:

show parameter local_l                                                                 ---------------- the listener IP here should be the public VIP
select instance_name,status from v$instance;
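
Before the change, the output should still show the old VIP; on racdb1 it would look something like this (illustrative, using the old VIP from this article):

NAME            TYPE    VALUE
--------------- ------- ----------------------------------------------------------------------------------
local_listener  string  (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.11.27)(PORT=1521))))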


Use the following commands. (The grid change is per-instance by itself, but if the oracle change omits the sid clause it applies to all nodes, so the oracle commands must name a specific instance; specifying the sid for both grid and oracle is recommended.)

grid (node 1):
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.222)(PORT=1521))))' scope=both sid='+ASM1';
oracle:
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.222)(PORT=1521))))' scope=both sid='racdb1';


grid (node 2):
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.223)(PORT=1521))))' scope=both sid='+ASM2';
oracle:
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.223)(PORT=1521))))' scope=both sid='racdb2';


If you also need to modify the scan_listener settings, the following commands are provided:

srvctl  modify  scan_listener -p 1521                                        ------ run as the grid user; changes the SCAN listener port
alter system set remote_listener='rac-scan:1521';                   ------ run in the database as the oracle user; sets remote_listener so the instance registers with the remote (SCAN) listener  ---- needed for RAC load balancing
alter system set remote_listener='';                                          ------ removes the remote listener
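
The new port can then be verified as the grid user (a check not in the original article):

srvctl config scan_listener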



Step 9: Check that everything is healthy

[grid@rac01 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       rac01                                       
               ONLINE  ONLINE       rac02                                       
ora.DATADG.dg
               ONLINE  ONLINE       rac01                                       
               ONLINE  ONLINE       rac02                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac01                                       
               ONLINE  ONLINE       rac02                                       
ora.asm
               ONLINE  ONLINE       rac01                    Started            
               ONLINE  ONLINE       rac02                    Started            
ora.gsd
               OFFLINE OFFLINE      rac01                                       
               OFFLINE OFFLINE      rac02                                       
ora.net1.network
               ONLINE  ONLINE       rac01                                       
               ONLINE  ONLINE       rac02                                       
ora.ons
               ONLINE  ONLINE       rac01                                       
               ONLINE  ONLINE       rac02                                       
ora.registry.acfs
               ONLINE  ONLINE       rac01                                       
               ONLINE  ONLINE       rac02                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac02                                       
ora.cvu
      1        ONLINE  ONLINE       rac01                                       
ora.oc4j
      1        ONLINE  ONLINE       rac01                                       
ora.rac01.vip
      1        ONLINE  ONLINE       rac01                                       
ora.rac02.vip
      1        ONLINE  ONLINE       rac02                                       
ora.racdb.db
      1        ONLINE  ONLINE       rac01                    Open               
      2        ONLINE  ONLINE       rac02                    Open               
ora.scan1.vip
      1        ONLINE  ONLINE       rac02


[grid@rac01 ~]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 05-JAN-2015 22:00:11

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                05-JAN-2015 21:57:29
Uptime                    0 days 0 hr. 2 min. 42 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/11.2.0.3/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/rac01/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.5)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.222)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "racdb" has 1 instance(s).
  Instance "racdb1", status READY, has 1 handler(s) for this service...
The command completed successfully


[grid@rac01 ~]$ srvctl status nodeapps
VIP rac01-vip is enabled
VIP rac01-vip is running on node: rac01
VIP rac02-vip is enabled
VIP rac02-vip is running on node: rac02
Network is enabled
Network is running on node: rac01
Network is running on node: rac02
GSD is disabled
GSD is not running on node: rac01
GSD is not running on node: rac02
ONS is enabled
ONS daemon is running on node: rac01
ONS daemon is running on node: rac02



[grid@rac01 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node rac01
Instance racdb2 is running on node rac02



[grid@rac01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online




Step 10: Appendix: common commands for checking RAC


1. Check that the resources are all online (everything except gsd)
/u01/11.2.0.3/grid/bin/crsctl status resource -t
#crs_stat -t
#crs_stat -t -v
#crsctl status res -t      shorthand for the first command
PS: the crs_* commands are being deprecated; use crsctl instead


2. Check ASM/listener/nodes
olsnodes -n                # show how many nodes there are
srvctl status listener
srvctl status scan_listener
srvctl status asm -a

srvctl status nodeapps
srvctl status nodeapps -n rac01
(the corresponding srvctl config ... and srvctl start ... variants take the same arguments)


3. Check the database status and configuration, and the instance status
srvctl status database -d racdb
srvctl config database -d racdb -a

srvctl status instance -d racdb -n rac01
srvctl status instance -d racdb -i racdb1


4. Check ora/ASM processes
ps -ef|grep ASM
ps -ef | grep ora_


5. Check the listener (LISTENER) processes
ps -ef | grep lsnr|grep -v 'grep'| grep -v 'ocfs'|awk '{print $9}'


6. Check the OCR/CTSS/CRS
ocrcheck

crsctl check ctss

crsctl check crs
#crsctl status crs
#crsctl start crs

7. Check the CSS voting disks
crsctl query css votedisk


8. Check time synchronization
cluvfy comp clocksync -verbose


9. Check SSH equivalence
From rac01:

ssh rac02 date;date
ssh rac-scan date;date
ssh rac02-vip date;date
ssh rac02-priv date;date

From rac02:

ssh rac01 date;date
ssh rac-scan date;date
ssh rac01-vip date;date
ssh rac01-priv date;date

10. Check the node VIPs and the SCAN IP
srvctl  config nodeapps
srvctl  config scan




---end---


