Oracle Cluster Management: 19c RAC IPv4+IPv6 Dual-Stack Configuration in Practice

1 About IPv6 Support

For a single-instance environment, IPv6 is supported from database version 11.2.0.4 onward; a RAC environment requires version 12.2 or later.

IPv6 has been enabled by default since Linux 7 (RHEL/OL 7). How do you confirm whether IPv6 is enabled? Two common methods are described below:

1. Check the network interface attributes

ifconfig -a

If the output contains "inet6 ..." entries, IPv6 is enabled.

2. Check the loaded kernel modules

lsmod | grep ipv6

In testing, the default (link-local) IPv6 address could not be pinged, so a custom static address is needed. Modify the NIC configuration file and add entries along the following lines:
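
A minimal sketch of the additions to /etc/sysconfig/network-scripts/ifcfg-eno3 on RHEL/OL 7; the address and gateway below reuse values from this environment and are only illustrative, so substitute your own:

IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6ADDR=2409:8002:5A06:0120:0010:0000:0002:D005/116
IPV6_DEFAULTGW=2409:8002:5A06:0120:0010:0000:0002:D001

Then restart the network (service network restart) for the new address to take effect.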

3 Network Configuration Information

Network Address Configuration in a Cluster
You can configure a network interface for either IPv4, IPv6, or both types of addresses
on a given network.
If you configure redundant network interfaces using a third-party technology, then
Oracle does not support configuring one interface to support IPv4 addresses and the
other to support IPv6 addresses. You must configure network interfaces of a redundant
interface pair with the same IP address type. If you use the Oracle Clusterware
Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
All the nodes in the cluster must use the same IP protocol configuration. Either all the
nodes use only IPv4, or all the nodes use only IPv6, or all the nodes use both IPv4
and IPv6. You cannot have some nodes in the cluster configured to support only IPv6
addresses, and other nodes in the cluster configured to support only IPv4 addresses.

In short: all nodes must use the same protocol configuration, either all IPv4, all IPv6, or all dual-stack; the private interconnect cannot use IPv6.


The local listener listens on endpoints based on the address types of the subnets
configured for the network resource. Possible types are IPV4, IPV6, or both.

In this environment the corresponding subnet is the gateway address minus 1 (gw - 1):

2409:8002:5A06:0120:0010:0000:0002:D001 = gateway
2409:8002:5A06:0120:0010:0000:0002:D000 = subnet
116 = prefix length (netmask)
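
As a quick sanity check on these numbers (my own arithmetic, not from the original post): a /116 prefix leaves 128 - 116 = 12 host bits, i.e. the last three hex digits of the address:

Network:    2409:8002:5A06:0120:0010:0000:0002:D000   (low 12 bits cleared)
Host range: 2409:8002:5A06:0120:0010:0000:0002:D000 to 2409:8002:5A06:0120:0010:0000:0002:DFFF   (4096 addresses)
Gateway:    2409:8002:5A06:0120:0010:0000:0002:D001   (network + 1, hence "subnet = gw - 1")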

Changing Static IPv4 Addresses To Static IPv6 Addresses Using SRVCTL

Core ideas:

1 Make sure all addresses on the relevant network interfaces are configured in static mode.

If the IPv4 network is in mixed mode with both static and dynamic
addresses, then you cannot perform this procedure. You must first transition
all addresses to static.

2 Modify the /etc/hosts file

Add both the IPv4 and IPv6 addresses to the hosts file, for example as shown below.
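
A minimal /etc/hosts sketch, assuming the VIP names used later in this article (orcl1-vip, orcl2-vip) and the addresses shown in the srvctl output below; adjust to your own environment:

192.168.224.16                    orcl1-vip
192.168.224.17                    orcl2-vip
2409:8002:5a06:120:10:0:2:d010    orcl1-vip
2409:8002:5a06:120:10:0:2:d011    orcl2-vip

Each VIP name must resolve to both its IPv4 and IPv6 address; the SCAN is normally resolved through DNS rather than /etc/hosts.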

3  To change a static IPv4 address to a static IPv6 address:

1. Add an IPv6 subnet using the following command as  root once for the entire
network:

# srvctl modify network -subnet ipv6_subnet/prefix_length

In the preceding syntax  ipv6_subnet/prefix_length is the subnet of the IPv6
address to which you are changing along with the prefix length, such as 3001::/64

[root@orcl1 bin]# ./srvctl modify network -subnet 2409:8002:5A06:0120:0010:0000:0002:D000/116/eno3

[root@orcl1 bin]# ./srvctl config network
Network 1 exists
Subnet IPv4: 192.168.224.0/255.255.255.192/eno3, static
Subnet IPv6: 2409:8002:5a06:120:10:0:2:d000/116/eno3, static (inactive)
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
[root@orcl1 bin]# 

4 Add IPv6 addresses to the database VIPs

2. Add an IPv6 VIP using the following command as  root once on each node:

# srvctl modify vip -node node_name -netnum network_number -address vip_name/netmask

[root@orcl1 bin]# ./srvctl modify vip  -node orcl1 -netnum 1 -address orcl1-vip/116
[root@orcl1 bin]# ./srvctl modify vip  -node orcl2 -netnum 1 -address orcl2-vip/116

[root@orcl2 bin]# ./srvctl config vip -node orcl2
VIP exists: network number 1, hosting node orcl2
VIP Name: orcl2-vip
VIP IPv4 Address: 192.168.224.17
VIP IPv6 Address: 2409:8002:5a06:120:10:0:2:d011 (inactive)
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
[root@orcl2 bin]# 


[root@orcl1 bin]# ./srvctl config vip -node orcl1
VIP exists: network number 1, hosting node orcl1
VIP Name: orcl1-vip
VIP IPv4 Address: 192.168.224.16
VIP IPv6 Address: 2409:8002:5a06:120:10:0:2:d010 (inactive)
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
[root@orcl1 bin]# 

In the preceding syntax:
• node_name is the name of the node
• network_number is the number of the network
• vip_name/netmask is the name of a local VIP that resolves to both IPv4 and
IPv6 addresses
The IPv4 netmask or IPv6 prefix length that follows the VIP name must satisfy
two requirements:
– If you specify a netmask in IPv4 format (such as 255.255.255.0), then
the VIP name resolves to IPv4 addresses (but can also resolve to IPv6
addresses). Similarly, if you specify an IPv6 prefix length (such as 64),
then the VIP name resolves to IPv6 addresses (but can also resolve to
IPv4 addresses).
– If you specify an IPv4 netmask, then it should match the netmask of
the registered IPv4 network subnet number, regardless of whether the
-iptype of the network is IPv6. Similarly, if you specify an IPv6 prefix
length, then it must match the prefix length of the registered IPv6 network
subnet number, regardless of whether the  -iptype of the network is IPv4.

Alternatively, you can use the remove-and-re-add approach: the node VIPs must first be removed and then added back with both address types, as sketched below.
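
A minimal sketch of the remove-and-re-add approach for one node, reusing the commands that appear in the cleanup section later in this article (stop the node's listener and VIP first; names and addresses are from this environment):

# ./srvctl stop listener -n orcl1
# ./srvctl stop vip -n orcl1
# ./srvctl remove vip -vip orcl1-vip
# ./srvctl add vip -node orcl1 -netnum 1 -address 192.168.224.16/255.255.255.192/eno3
# ./srvctl modify vip -node orcl1 -netnum 1 -address 2409:8002:5A06:0120:0010:0000:0002:D010/116/eno3
# ./srvctl start vip -n orcl1
# ./srvctl start listener -n orcl1

Repeat for each node.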

5 Modify the public IP (add the IPv6 public network to OCR)

3. Add the IPv6 network resource to OCR using the following command:

$ oifcfg setif -global if_name/subnet:public

oifcfg setif -global eno3/2409:8002:5A06:0120:0010:0000:0002:D000:public

[grid@orcl1 ~]$ oifcfg getif
eno3  192.168.224.0  global  public
ens3f0  10.2.0.0  global  cluster_interconnect,asm
eno3  2409:8002:5A06:0120:0010:0000:0002:D000  global  public

6 Modify the SCAN

4. Update the SCAN in DNS to have as many IPv6 addresses as there are IPv4
addresses. Add IPv6 addresses to the SCAN VIPs using the following command
as  root once for the entire network:

scan_name is the name of a SCAN that resolves to both IPv4 and IPv6 addresses.

[root@orcl1 bin]# ./srvctl modify scan -scanname eomsdb-scan
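
It is also worth verifying that the SCAN name resolves to both A and AAAA records in DNS before and after this change; a simple check (the nslookup/dig commands are my suggestion, not from the original article):

nslookup eomsdb-scan
dig +short eomsdb-scan A
dig +short eomsdb-scan AAAA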

7 Activate IPv4 and IPv6 at the same time

5. Convert the network IP type from IPv4 to both IPv4 and IPv6 using the following command as root once for the entire network:

srvctl modify network -netnum network_number -iptype both

This command brings up the IPv6 static addresses.
[root@orcl1 bin]# ./srvctl modify network -netnum 1 -iptype both

6. Change all clients served by the cluster from IPv4 networks and addresses to IPv6
networks and addresses.
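
On the client side this usually means no tnsnames.ora change at all; a minimal sketch for this environment (assuming the SCAN name orcl-scan now resolves to both A and AAAA records and the service name is orcl):

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = orcl-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl))
  )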

8 To use only IPv6, configure as follows

7. Transition the network from using both protocols to using only IPv6 using the
following command:

[root@eomsdb2 bin]# ./srvctl modify network -netnum 1  -iptype IPV6

8. Modify the VIP using a VIP name that resolves to IPv6 by running the following command as root:

# srvctl modify vip -node node_name -address vip_name -netnum network_number

Do this once for each node.

9. Modify the SCAN using a SCAN name that resolves to IPv6 by running the following command:

# srvctl modify scan -scanname scan_name

Do this once for the entire cluster.
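
For the orcl environment used below, a hedged concrete sketch of steps 7 to 9 would look roughly like this (run as root):

# ./srvctl modify network -netnum 1 -iptype ipv6
# ./srvctl modify vip -node orcl1 -address orcl1-vip -netnum 1
# ./srvctl modify vip -node orcl2 -address orcl2-vip -netnum 1
# ./srvctl modify scan -scanname orcl-scan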

9 If the local listener has no IPv6 address, configure it as follows

Add the following to the grid user's listener.ora ($GRID_HOME/network/admin/listener.ora); in this node 2 example, D011 is the node VIP and D007 is the node's public IPv6 address:

LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))(ADDRESS = (PROTOCOL = TCP)(HOST =2409:8002:5A06:0120:0010:0000:0002:D011)(PORT = 1521)(IP=FIRST))(ADDRESS = (PROTOCOL = TCP)(HOST =2409:8002:5A06:0120:0010:0000:0002:D007)(PORT = 1521)(IP=FIRST))))
# add the IPv6 addresses

Comment out the original (first) LISTENER line, then restart the local listener so that the new endpoints take effect (see the sketch below).
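
A minimal sketch of restarting the node's local listener afterwards so the new endpoints are picked up (run as the grid user; the node name is from this environment):

srvctl stop listener -node orcl2
srvctl start listener -node orcl2
lsnrctl status listener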

10 Checking and connecting over IPv6

lsnrctl status listener_scan1

sqlplus system/ora#123@[IPV6ADDRESS]:1522/arsystem
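
For the orcl environment in the walkthrough below, an equivalent hedged example would be (the SCAN VIP d012, port 1521 and service orcl are taken from the listener output; the password is a placeholder):

sqlplus system/<password>@[2409:8002:5a06:120:10:0:2:d012]:1521/orcl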

11 Removing the IPv6 configuration

1 Stop the listeners

2 Stop the node VIPs

3 Stop the SCAN and the SCAN listener

[root@db1 bin]# ./srvctl stop scan_listener -i 1
[root@db1 bin]# ./srvctl stop scan
[root@db1 bin]# ./srvctl stop listener -n eomsdb1
[root@db1 bin]# ./srvctl stop listener -n eomsdb2
[root@db1 bin]# ./srvctl stop vip -n eomsdb1
[root@db1 bin]# ./srvctl stop vip -n eomsdb2
[root@db1 bin]# 
 

./srvctl modify network -netnum network_number -iptype ipv4
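
With network number 1, as used throughout this environment, the concrete form would presumably be:

# ./srvctl modify network -netnum 1 -iptype ipv4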

[root@orcldb1 bin]# ./oifcfg delif -global eno3/2409:8002:5A06:0120:0010:0000:0002:D000
[root@orcldb1 bin]# ./oifcfg getif

--- Remove the VIPs, then re-add them

[root@orcldb1 bin]# ./srvctl remove vip -vip orcldb1-vip
Please confirm that you intend to remove the VIPs orcldb1-vip (y/[n]) y
[root@orcldb1 bin]# ./srvctl remove vip -vip orcldb2-vip
Please confirm that you intend to remove the VIPs orcldb2-vip (y/[n]) y

[root@orcldb1 bin]# ./srvctl add vip -node orcldb1 -netnum 1 -address 10.228.224.16/255.255.255.192/eno3
[root@orcldb1 bin]# ./srvctl add vip -node orcldb2 -netnum 1 -address 10.228.224.17/255.255.255.192/eno3
[root@orcldb1 bin]# ./srvctl modify vip  -node orcldb1 -netnum 1 -address 2409:8002:5A06:0120:0010:0000:0002:D010/116/eno3
[root@orcldb1 bin]# ./srvctl modify vip  -node orcldb2 -netnum 1 -address 2409:8002:5A06:0120:0010:0000:0002:D011/116/eno3

8. Modify the VIP using a VIP name that resolves to IPv4 by running the following command as root:

# srvctl modify vip -node node_name -address vip_name -netnum network_number

Do this once for each node.

9. Modify the SCAN using a SCAN name that resolves to IPv4 by running the following command:

# srvctl modify scan -scanname scan_name

Do this once for the entire cluster.

It is unclear (untested) whether the :public suffix is also accepted when deleting the interface:

oifcfg delif -global eno3/2409:8002:5A06:0120:0010:0000:0002:D000:public
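
After the cleanup, the stopped resources still have to be brought back online; a minimal sketch using the same resources that were stopped above (node names from the eomsdb environment):

# ./srvctl start vip -n eomsdb1
# ./srvctl start vip -n eomsdb2
# ./srvctl start listener -n eomsdb1
# ./srvctl start listener -n eomsdb2
# ./srvctl start scan
# ./srvctl start scan_listener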

Hands-on case summary 1

[root@orcl1 bin]# ./srvctl modify network -netnum 1 -iptype both
PRCN-3049 : Cannot change the network subnet address type to 'both' because it has only an IPv4 subnet address
[root@orcl1 bin]# ./srvctl modify scan -scanname orcl-scan
PRCS-1034 : Failed to modify Single Client Access Name orcl-scan
PRCS-1138 : invalid VIP addresses "2409:8002:5a06:120:10:0:2:d012" because the specified IP addresses are reachable


Earlier, the NIC on node 1 had been brought down directly, so all of its addresses had drifted over to node 2 (which is why the IPv6 VIP addresses were reported as already reachable):

[root@orcl2 ~]# ifconfig -a
eno1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether c4:b8:b4:2e:b7:10  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eno2: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether c4:b8:b4:2e:b7:11  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eno3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.224.7  netmask 255.255.255.192  broadcast 192.168.224.63
        inet6 fe80::c6b8:b4ff:fe2e:b70f  prefixlen 64  scopeid 0x20<link>
        inet6 2409:8002:5a06:120:10:0:2:d007  prefixlen 116  scopeid 0x0<global>
        inet6 2409:8002:5a06:120:10:0:2:d010  prefixlen 116  scopeid 0x0<global>
        inet6 2409:8002:5a06:120:10:0:2:d011  prefixlen 116  scopeid 0x0<global>
        inet6 2409:8002:5a06:120:10:0:2:d012  prefixlen 116  scopeid 0x0<global>

        ether c4:b8:b4:2e:b7:0f  txqueuelen 1000  (Ethernet)
        RX packets 6649320  bytes 1190353795 (1.1 GiB)
        RX errors 0  dropped 811432  overruns 0  frame 0
        TX packets 4488188  bytes 1628576347 (1.5 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eno3:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.224.17  netmask 255.255.255.192  broadcast 192.168.224.63
        ether c4:b8:b4:2e:b7:0f  txqueuelen 1000  (Ethernet)

eno3:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.224.18  netmask 255.255.255.192  broadcast 192.168.224.63
        ether c4:b8:b4:2e:b7:0f  txqueuelen 1000  (Ethernet)

eno4: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether c4:b8:b4:2e:b7:0e  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens3f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.2.0.121  netmask 255.255.255.0  broadcast 10.2.0.255
        inet6 fe80::360a:98ff:fe9c:ed3d  prefixlen 64  scopeid 0x20<link>
        ether 34:0a:98:9c:ed:3d  txqueuelen 1000  (Ethernet)
        RX packets 1264679191  bytes 1648029024996 (1.4 TiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1019686917  bytes 1257102142297 (1.1 TiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xa2600000-a26fffff  

ens3f0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.17.118  netmask 255.255.224.0  broadcast 169.254.31.255
        ether 34:0a:98:9c:ed:3d  txqueuelen 1000  (Ethernet)
        device memory 0xa2600000-a26fffff  

ens3f1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 34:0a:98:9c:ed:3e  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xa2500000-a25fffff  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 79001912  bytes 164043179883 (152.7 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 79001912  bytes 164043179883 (152.7 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:2c:fa:86  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0-nic: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:54:00:2c:fa:86  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Restart the network on node 2:

service network restart

Before the online modify network change, the srvctl modify scan command now succeeds:


[root@orcl1 bin]# ./srvctl modify scan -scanname orcl-scan
[root@orcl1 bin]# 

[grid@orcl1 ~]$ lsnrctl status listener_scan1

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 15-FEB-2023 16:57:28

Copyright (c) 1991, 2020, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                15-FEB-2023 16:55:22
Uptime                    0 days 0 hr. 2 min. 5 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19c/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/orcl1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.224.18)(PORT=1521)))
Services Summary...
Service "orcl" has 2 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
  Instance "orcl2", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 2 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
  Instance "orcl2", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@orcl1 ~]$ 

[root@orcl1 grid]# cd nbin
-bash: cd: nbin: No such file or directory
[root@orcl1 grid]# cd bin/
[root@orcl1 bin]# ./srvctl modify network -netnum network_number -iptype both
PRKO-2111 : Invalid number network_number for command line option netnum
[root@orcl1 bin]# ./srvctl modify network -netnum n^Ciptype both
[root@orcl1 bin]# ./srvctl config network
Network 1 exists
Subnet IPv4: 192.168.224.0/255.255.255.192/eno3, static
Subnet IPv6: 2409:8002:5a06:120:10:0:2:d000/116/eno3, static (inactive)
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
[root@orcl1 bin]# ./srvctl config vip -node1
PRKO-2002 : Invalid command line option: -node1
[root@orcl1 bin]# ./srvctl config vip -node orcl
PRKO-2310 : VIP does not exist on node orcl.
[root@orcl1 bin]# ./srvctl config vip -node orcl1
VIP exists: network number 1, hosting node orcl1
VIP Name: orcl1-vip
VIP IPv4 Address: 192.168.224.16
VIP IPv6 Address: 2409:8002:5a06:120:10:0:2:d010 (inactive)
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
[root@orcl1 bin]# ./srvctl config vip -node orcl2
VIP exists: network number 1, hosting node orcl2
VIP Name: orcl2-vip
VIP IPv4 Address: 192.168.224.17
VIP IPv6 Address: 2409:8002:5a06:120:10:0:2:d011 (inactive)
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
[root@orcl1 bin]# 

[root@orcl1 bin]# ps -ef|grep tns
root        816      2  0 16:39 ?        00:00:00 [netns]
grid      70758      1  0 16:41 ?        00:00:00 /u01/app/19c/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid      77489      1  0 16:43 ?        00:00:00 /u01/app/19c/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid      90136      1  0 16:55 ?        00:00:00 /u01/app/19c/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
root     117440 114119  0 16:59 pts/0    00:00:00 grep --color=auto tns
[root@orcl1 bin]# lsnrctl status
bash: lsnrctl: command not found...
[root@orcl1 bin]# su - grid
Last login: Wed Feb 15 16:57:53 CST 2023
[grid@orcl1 ~]$ lsnrctl satus

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 15-FEB-2023 16:59:35

Copyright (c) 1991, 2020, Oracle.  All rights reserved.

NL-00853: undefined command "satus".  Try "help"
[grid@orcl1 ~]$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 15-FEB-2023 16:59:38

Copyright (c) 1991, 2020, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                15-FEB-2023 16:43:12
Uptime                    0 days 0 hr. 16 min. 26 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19c/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/orcl1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.224.5)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.224.16)(PORT=1521)))
Services Summary...
Service "+APX" has 1 instance(s).
  Instance "+APX1", status READY, has 1 handler(s) for this service...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ARCH" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_OGG" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA1" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA2" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA3" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_REDO" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "orcl" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@orcl1 ~]$ 


Open question at this point: does the change need to be activated on both nodes?

Node 2:

[root@orcl2 bin]# ./srvctl config network
Network 1 exists
Subnet IPv4: 192.168.224.0/255.255.255.192/eno3, static
Subnet IPv6: 2409:8002:5a06:120:10:0:2:d000/116/eno3, static (inactive)
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
[root@orcl2 bin]# ./srvctl config vip -node orcl1
VIP exists: network number 1, hosting node orcl1
VIP Name: orcl1-vip
VIP IPv4 Address: 192.168.224.16
VIP IPv6 Address: 2409:8002:5a06:120:10:0:2:d010 (inactive)
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
[root@orcl2 bin]# ./srvctl config vip -node orcl2
VIP exists: network number 1, hosting node orcl2
VIP Name: orcl2-vip
VIP IPv4 Address: 192.168.224.17
VIP IPv6 Address: 2409:8002:5a06:120:10:0:2:d011 (inactive)
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
[root@orcl2 bin]# 

[grid@orcl2 ~]$ lsnrctl status listener

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 15-FEB-2023 16:56:24

Copyright (c) 1991, 2020, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                15-FEB-2023 16:50:39
Uptime                    0 days 0 hr. 5 min. 44 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19c/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/orcl2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.224.7)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.224.17)(PORT=1521)))
Services Summary...
Service "+APX" has 1 instance(s).
  Instance "+APX2", status READY, has 1 handler(s) for this service...
Service "+ASM" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_ARCH" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_OGG" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA1" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA2" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA3" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_REDO" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "orcl" has 1 instance(s).
  Instance "orcl2", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl2", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@orcl2 ~]$ 

./srvctl modify network -netnum 1 -iptype both

After the online modify network change:

[root@orcl1 bin]# ./srvctl config network
Network 1 exists
Subnet IPv4: 192.168.224.0/255.255.255.192/eno3, static
Subnet IPv6: 2409:8002:5a06:120:10:0:2:d000/116/eno3, static
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
[root@orcl1 bin]# ./srvctl config vip -node orcl1
VIP exists: network number 1, hosting node orcl1
VIP Name: orcl1-vip
VIP IPv4 Address: 192.168.224.16
VIP IPv6 Address: 2409:8002:5a06:120:10:0:2:d010
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
[root@orcl1 bin]# ./srvctl config vip -node orcl2
VIP exists: network number 1, hosting node orcl2
VIP Name: orcl2-vip
VIP IPv4 Address: 192.168.224.17
VIP IPv6 Address: 2409:8002:5a06:120:10:0:2:d011
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 

[grid@orcl1 ~]$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 15-FEB-2023 17:03:06

Copyright (c) 1991, 2020, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                15-FEB-2023 16:43:12
Uptime                    0 days 0 hr. 19 min. 54 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19c/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/orcl1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.224.16)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=2409:8002:5a06:120:10:0:2:d005)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.224.5)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=2409:8002:5a06:120:10:0:2:d010)(PORT=1521)))
Services Summary...
Service "+APX" has 1 instance(s).
  Instance "+APX1", status READY, has 1 handler(s) for this service...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ARCH" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_OGG" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA1" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA2" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA3" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_REDO" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "orcl" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@orcl1 ~]$ lsnrctl status listener_scan1

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 15-FEB-2023 17:03:18

Copyright (c) 1991, 2020, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                15-FEB-2023 16:55:22
Uptime                    0 days 0 hr. 7 min. 55 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19c/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/orcl1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.224.18)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=2409:8002:5a06:120:10:0:2:d012)(PORT=1521)))
Services Summary...
Service "orcl" has 2 instance(s).
  Instance "orcl1", status READY, has 2 handler(s) for this service...
  Instance "orcl2", status READY, has 2 handler(s) for this service...
Service "orclXDB" has 2 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
  Instance "orcl2", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@orcl1 ~]$ 


Confirm on node 2:
[grid@orcl2 ~]$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 15-FEB-2023 16:58:14

Copyright (c) 1991, 2020, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                15-FEB-2023 16:50:39
Uptime                    0 days 0 hr. 7 min. 34 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19c/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/orcl2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.224.17)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=2409:8002:5a06:120:10:0:2:d007)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.224.7)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=2409:8002:5a06:120:10:0:2:d011)(PORT=1521)))
Services Summary...
Service "+APX" has 1 instance(s).
  Instance "+APX2", status READY, has 1 handler(s) for this service...
Service "+ASM" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_ARCH" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_OGG" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA1" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA2" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADATA3" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_REDO" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "orcl" has 1 instance(s).
  Instance "orcl2", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl2", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@orcl2 ~]$ 


[grid@orcl2 ~]$ srvctl config network 
Network 1 exists
Subnet IPv4: 192.168.224.0/255.255.255.192/eno3, static
Subnet IPv6: 2409:8002:5a06:120:10:0:2:d000/116/eno3, static
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
[grid@orcl2 ~]$ 

A GoldenGate side note: the GLOBALS file contains USEIPV4, so the manager process still listens on IPv4 only:

GGSCI (orcl1) 6> exit
[eoms@orcl1 ggs]$ more GLOBALS 
USEIPV4
[eoms@orcl1 ggs]$ netstat -tunlp|grep 7809
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:7809            0.0.0.0:*               LISTEN      146719/./mgr        
[eoms@orcl1 ggs]$ 
 
